Dec 02 10:13:29 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 02 10:13:29 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:13:29 crc restorecon[4683]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 
10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 
10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:13:29 crc restorecon[4683]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 
crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc 
restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:13:29 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 02 10:13:30 crc kubenswrapper[4711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 10:13:30 crc kubenswrapper[4711]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 02 10:13:30 crc kubenswrapper[4711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 10:13:30 crc kubenswrapper[4711]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 02 10:13:30 crc kubenswrapper[4711]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 02 10:13:30 crc kubenswrapper[4711]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.909241 4711 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912700 4711 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912724 4711 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912731 4711 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912746 4711 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912752 4711 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912758 4711 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912763 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912769 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912775 4711 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 10:13:30 crc 
kubenswrapper[4711]: W1202 10:13:30.912779 4711 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912784 4711 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912789 4711 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912794 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912799 4711 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912803 4711 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912808 4711 feature_gate.go:330] unrecognized feature gate: Example Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912812 4711 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912817 4711 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912821 4711 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912825 4711 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912830 4711 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912836 4711 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912840 4711 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912844 4711 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiVCenters Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912849 4711 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912855 4711 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912860 4711 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912865 4711 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912869 4711 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912874 4711 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912878 4711 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912882 4711 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912888 4711 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912893 4711 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912898 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912904 4711 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912908 4711 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912914 4711 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912919 4711 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912923 4711 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912929 4711 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912933 4711 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912937 4711 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912941 4711 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912946 4711 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912981 4711 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912986 4711 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912990 4711 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912994 4711 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.912999 4711 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913003 4711 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913007 4711 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913012 4711 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913016 4711 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913020 4711 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913025 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913029 4711 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913034 4711 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913038 4711 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913043 4711 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913050 4711 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913055 4711 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913060 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913064 4711 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913068 4711 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913072 4711 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913077 4711 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913081 4711 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913085 4711 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913091 4711 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.913097 4711 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913382 4711 flags.go:64] FLAG: --address="0.0.0.0"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913398 4711 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913408 4711 flags.go:64] FLAG: --anonymous-auth="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913415 4711 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913422 4711 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913428 4711 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913436 4711 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913442 4711 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913448 4711 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913454 4711 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913460 4711 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913466 4711 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913471 4711 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913476 4711 flags.go:64] FLAG: --cgroup-root=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913482 4711 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913487 4711 flags.go:64] FLAG: --client-ca-file=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913492 4711 flags.go:64] FLAG: --cloud-config=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913497 4711 flags.go:64] FLAG: --cloud-provider=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913502 4711 flags.go:64] FLAG: --cluster-dns="[]"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913508 4711 flags.go:64] FLAG: --cluster-domain=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913513 4711 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913519 4711 flags.go:64] FLAG: --config-dir=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913525 4711 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913531 4711 flags.go:64] FLAG: --container-log-max-files="5"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913538 4711 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913543 4711 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913548 4711 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913554 4711 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913559 4711 flags.go:64] FLAG: --contention-profiling="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913564 4711 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913572 4711 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913577 4711 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913582 4711 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913591 4711 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913596 4711 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913601 4711 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913606 4711 flags.go:64] FLAG: --enable-load-reader="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913611 4711 flags.go:64] FLAG: --enable-server="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913616 4711 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913639 4711 flags.go:64] FLAG: --event-burst="100"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913644 4711 flags.go:64] FLAG: --event-qps="50"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913649 4711 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913654 4711 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913659 4711 flags.go:64] FLAG: --eviction-hard=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913666 4711 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913672 4711 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913677 4711 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913682 4711 flags.go:64] FLAG: --eviction-soft=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913688 4711 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913693 4711 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913698 4711 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913704 4711 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913709 4711 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913716 4711 flags.go:64] FLAG: --fail-swap-on="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913722 4711 flags.go:64] FLAG: --feature-gates=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913728 4711 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913734 4711 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913739 4711 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913745 4711 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913750 4711 flags.go:64] FLAG: --healthz-port="10248"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913756 4711 flags.go:64] FLAG: --help="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913761 4711 flags.go:64] FLAG: --hostname-override=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913766 4711 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913772 4711 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913777 4711 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913782 4711 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913787 4711 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913793 4711 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913798 4711 flags.go:64] FLAG: --image-service-endpoint=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913803 4711 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913808 4711 flags.go:64] FLAG: --kube-api-burst="100"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913813 4711 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913819 4711 flags.go:64] FLAG: --kube-api-qps="50"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913824 4711 flags.go:64] FLAG: --kube-reserved=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913829 4711 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913834 4711 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913840 4711 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913845 4711 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913851 4711 flags.go:64] FLAG: --lock-file=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913856 4711 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913861 4711 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913865 4711 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913874 4711 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913879 4711 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913884 4711 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913889 4711 flags.go:64] FLAG: --logging-format="text"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913894 4711 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913900 4711 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913905 4711 flags.go:64] FLAG: --manifest-url=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913910 4711 flags.go:64] FLAG: --manifest-url-header=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913918 4711 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913923 4711 flags.go:64] FLAG: --max-open-files="1000000"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913929 4711 flags.go:64] FLAG: --max-pods="110"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913935 4711 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913941 4711 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913947 4711 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913970 4711 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913978 4711 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913984 4711 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.913989 4711 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914003 4711 flags.go:64] FLAG: --node-status-max-images="50"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914008 4711 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914013 4711 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914019 4711 flags.go:64] FLAG: --pod-cidr=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914024 4711 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914033 4711 flags.go:64] FLAG: --pod-manifest-path=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914038 4711 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914043 4711 flags.go:64] FLAG: --pods-per-core="0"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914049 4711 flags.go:64] FLAG: --port="10250"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914054 4711 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914060 4711 flags.go:64] FLAG: --provider-id=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914066 4711 flags.go:64] FLAG: --qos-reserved=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914071 4711 flags.go:64] FLAG: --read-only-port="10255"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914076 4711 flags.go:64] FLAG: --register-node="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914081 4711 flags.go:64] FLAG: --register-schedulable="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914086 4711 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914095 4711 flags.go:64] FLAG: --registry-burst="10"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914100 4711 flags.go:64] FLAG: --registry-qps="5"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914106 4711 flags.go:64] FLAG: --reserved-cpus=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914111 4711 flags.go:64] FLAG: --reserved-memory=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914118 4711 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914123 4711 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914128 4711 flags.go:64] FLAG: --rotate-certificates="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914133 4711 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914138 4711 flags.go:64] FLAG: --runonce="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914143 4711 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914148 4711 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914154 4711 flags.go:64] FLAG: --seccomp-default="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914159 4711 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914164 4711 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914171 4711 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914177 4711 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914182 4711 flags.go:64] FLAG: --storage-driver-password="root"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914187 4711 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914193 4711 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914197 4711 flags.go:64] FLAG: --storage-driver-user="root"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914202 4711 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914208 4711 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914213 4711 flags.go:64] FLAG: --system-cgroups=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914218 4711 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914228 4711 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914233 4711 flags.go:64] FLAG: --tls-cert-file=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914238 4711 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914244 4711 flags.go:64] FLAG: --tls-min-version=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914249 4711 flags.go:64] FLAG: --tls-private-key-file=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914254 4711 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914259 4711 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914264 4711 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914269 4711 flags.go:64] FLAG: --v="2"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914276 4711 flags.go:64] FLAG: --version="false"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914282 4711 flags.go:64] FLAG: --vmodule=""
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914288 4711 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914294 4711 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914424 4711 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914432 4711 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914437 4711 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914443 4711 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914448 4711 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914453 4711 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914458 4711 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914463 4711 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914467 4711 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914476 4711 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914482 4711 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914486 4711 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914491 4711 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914497 4711 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914501 4711 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914505 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914509 4711 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914513 4711 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914517 4711 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914521 4711 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914529 4711 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914533 4711 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914537 4711 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914542 4711 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914546 4711 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914550 4711 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914554 4711 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914558 4711 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914564 4711 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914569 4711 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914575 4711 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914580 4711 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914586 4711 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914592 4711 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914597 4711 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914602 4711 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914606 4711 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914611 4711 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914615 4711 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914620 4711 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914624 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914631 4711 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914635 4711 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914640 4711 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914644 4711 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914650 4711 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914655 4711 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914661 4711 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914667 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914673 4711 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914677 4711 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914681 4711 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914689 4711 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914694 4711 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914700 4711 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914705 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914709 4711 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914714 4711 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914718 4711 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914722 4711 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914726 4711 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914731 4711 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914735 4711 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914739 4711 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914744 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914748 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914752 4711 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914756 4711 feature_gate.go:330] unrecognized feature gate: Example
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914760 4711 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914764 4711 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.914769 4711 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.914785 4711 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.929854 4711 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.929890 4711 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.929973 4711 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.929982 4711 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.929986 4711 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.929990 4711 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.929994 4711 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.929998 4711 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930002 4711 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930005 4711 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930009 4711 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930013 4711 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930016 4711 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930021 4711 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930024 4711 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930028 4711 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930031 4711 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930035 4711 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930038 4711 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930042 4711 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930045 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930049 4711 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930053 4711
feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930056 4711 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930060 4711 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930064 4711 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930067 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930071 4711 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930074 4711 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930078 4711 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930082 4711 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930085 4711 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930089 4711 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930094 4711 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930098 4711 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930103 4711 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930106 4711 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 10:13:30 crc 
kubenswrapper[4711]: W1202 10:13:30.930110 4711 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930114 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930117 4711 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930121 4711 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930124 4711 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930128 4711 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930132 4711 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930136 4711 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930140 4711 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930144 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930149 4711 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930156 4711 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930160 4711 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930164 4711 feature_gate.go:330] unrecognized feature gate: Example Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930169 4711 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930173 4711 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930177 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930182 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930186 4711 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930191 4711 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930195 4711 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930199 4711 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930203 4711 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930206 4711 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930210 4711 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930214 4711 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930218 4711 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930222 4711 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930225 4711 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930229 4711 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930234 4711 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930237 4711 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930241 4711 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930244 4711 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930248 4711 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930253 4711 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.930261 4711 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930375 4711 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930382 4711 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930386 4711 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930390 4711 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930394 4711 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930398 4711 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930402 4711 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930406 4711 feature_gate.go:351] Setting deprecated 
feature gate KMSv1=true. It will be removed in a future release. Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930411 4711 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930415 4711 feature_gate.go:330] unrecognized feature gate: Example Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930419 4711 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930422 4711 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930426 4711 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930429 4711 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930433 4711 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930437 4711 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930442 4711 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930446 4711 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930449 4711 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930454 4711 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930457 4711 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930461 4711 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 
10:13:30.930464 4711 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930467 4711 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930471 4711 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930475 4711 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930479 4711 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930483 4711 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930486 4711 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930489 4711 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930493 4711 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930497 4711 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930500 4711 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930503 4711 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930507 4711 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930510 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930514 4711 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion 
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930519 4711 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930523 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930527 4711 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930532 4711 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930536 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930540 4711 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930543 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930547 4711 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930551 4711 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930555 4711 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930560 4711 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930564 4711 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930568 4711 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930573 4711 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930576 4711 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930581 4711 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930585 4711 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930589 4711 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930593 4711 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930597 4711 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930601 4711 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930605 4711 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930609 4711 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930613 4711 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930617 4711 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930621 4711 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930625 4711 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930630 4711 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930634 4711 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930638 4711 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930642 4711 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930646 4711 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930650 4711 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 10:13:30 crc kubenswrapper[4711]: W1202 10:13:30.930654 4711 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.930661 4711 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.931068 4711 server.go:940] "Client rotation is on, will bootstrap in background" Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 
10:13:30.934183 4711 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.934272 4711 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.934796 4711 server.go:997] "Starting client certificate rotation" Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.934829 4711 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.935048 4711 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-12 03:51:25.543942831 +0000 UTC Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.935172 4711 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 977h37m54.60877445s for next certificate rotation Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.942816 4711 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.945901 4711 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.958833 4711 log.go:25] "Validated CRI v1 runtime API" Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.979661 4711 log.go:25] "Validated CRI v1 image API" Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.981529 4711 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.984682 4711 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-02-10-09-02-00:/dev/sr0 7B77-95E7:/dev/vda2 
de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 02 10:13:30 crc kubenswrapper[4711]: I1202 10:13:30.984714 4711 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.005626 4711 manager.go:217] Machine: {Timestamp:2025-12-02 10:13:31.004245186 +0000 UTC m=+0.713611683 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:587f9aad-9cef-4053-bfa7-cda655f69c36 BootID:3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 
HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:73:35:f7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:73:35:f7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:65:16:73 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7b:b9:3d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:17:0b:a8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7b:89:4a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:c6:b1:47:3f:1e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:62:a3:5a:9d:30 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.005947 4711 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.006123 4711 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.008108 4711 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.008333 4711 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.008375 4711 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.008681 4711 topology_manager.go:138] "Creating topology manager with none policy" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.008697 4711 container_manager_linux.go:303] "Creating device plugin manager" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.008946 4711 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.009018 4711 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.009325 4711 state_mem.go:36] "Initialized new in-memory state store" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.009730 4711 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.013927 4711 kubelet.go:418] "Attempting to sync node with API server" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.014000 4711 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.014039 4711 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.014061 4711 kubelet.go:324] "Adding apiserver pod source" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.014081 4711 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 
10:13:31.015807 4711 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.016212 4711 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.017937 4711 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018636 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018670 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018683 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018696 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018715 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018727 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018739 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018758 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018771 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018784 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018802 4711 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018814 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.018845 4711 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.018996 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.019182 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.019224 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.019354 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.019526 4711 server.go:1280] "Started kubelet" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 
10:13:31.020113 4711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.020365 4711 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.020703 4711 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.020939 4711 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:31 crc systemd[1]: Started Kubernetes Kubelet. Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.022463 4711 server.go:460] "Adding debug handlers to kubelet server" Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.022288 4711 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d5e63a27549fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 10:13:31.019471355 +0000 UTC m=+0.728837822,LastTimestamp:2025-12-02 10:13:31.019471355 +0000 UTC m=+0.728837822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.022722 4711 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 
10:13:31.022752 4711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.022780 4711 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:44:13.452931853 +0000 UTC Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.022829 4711 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 339h30m42.430105398s for next certificate rotation Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.022900 4711 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.023146 4711 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.023177 4711 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.023232 4711 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.023585 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.023648 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.025051 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="200ms" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.025364 4711 factory.go:55] Registering systemd factory Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.025409 4711 factory.go:221] Registration of the systemd container factory successfully Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.026069 4711 factory.go:153] Registering CRI-O factory Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.026098 4711 factory.go:221] Registration of the crio container factory successfully Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.026175 4711 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.026208 4711 factory.go:103] Registering Raw factory Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.026228 4711 manager.go:1196] Started watching for new ooms in manager Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.027159 4711 manager.go:319] Starting recovery of all containers Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.036977 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037055 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 
02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037088 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037106 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037125 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037155 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037173 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037198 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037222 4711 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037246 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037262 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037281 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037304 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037332 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037348 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037377 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037400 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037417 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037434 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037457 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037474 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037498 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037538 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037556 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037578 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037596 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037623 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" 
seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037650 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037667 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037691 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037708 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037730 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037755 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 
10:13:31.037775 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037794 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037820 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037839 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037863 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037880 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037900 4711 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037924 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.037997 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038030 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038048 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038064 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038088 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038105 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038148 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038163 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038177 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038194 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038207 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038230 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038249 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038281 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038301 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038316 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038334 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" 
seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038348 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038367 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038381 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038393 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038410 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038423 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 
10:13:31.038459 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038472 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038485 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038500 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038513 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038532 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038546 4711 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038558 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038575 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038589 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038623 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038640 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038653 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038671 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038683 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038701 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038713 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.038991 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039014 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039027 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039043 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039056 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039069 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039086 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039098 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039111 4711 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039128 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039142 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039162 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039174 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.039187 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041331 4711 reconstruct.go:144] "Volume is marked device as 
uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041433 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041471 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041492 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041518 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041545 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041571 
4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041595 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041622 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041648 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041690 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041722 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041754 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041781 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041807 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041837 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041866 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041893 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041918 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.041944 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042093 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042119 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042158 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042183 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042208 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 02 
10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042231 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042260 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042284 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042308 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042331 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042358 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc 
kubenswrapper[4711]: I1202 10:13:31.042383 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042408 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042434 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042460 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042483 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042507 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042537 4711 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042562 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042601 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042627 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042654 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042683 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042712 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042740 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042772 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042802 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042829 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042853 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042878 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042905 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042932 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.042992 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043020 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043046 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043097 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043125 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043151 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043177 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043204 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043242 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043273 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043297 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043324 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043350 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043377 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043403 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043430 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043507 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043534 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043560 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043585 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043610 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043634 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043658 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043684 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043711 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043737 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043766 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043793 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043854 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043882 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043908 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.043933 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044016 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044044 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044069 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044095 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044121 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044145 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044169 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044195 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044220 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044247 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044272 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044299 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044326 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044353 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044379 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044405 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044435 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044460 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044485 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044511 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044540 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044566 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044594 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044640 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044666 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044693 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044719 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044745 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044771 4711 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044798 4711 reconstruct.go:97] "Volume reconstruction finished"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.044817 4711 reconciler.go:26] "Reconciler: start to sync state"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.058822 4711 manager.go:324] Recovery completed
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.068013 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.069489 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.069525 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.069535 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.071921 4711 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.071947 4711 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.072069 4711 state_mem.go:36] "Initialized new in-memory state store"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.075090 4711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.076388 4711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.077064 4711 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.077112 4711 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.077156 4711 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.079656 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused
Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.079714 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.079865 4711 policy_none.go:49] "None policy: Start"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.080657 4711 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.080686 4711 state_mem.go:35] "Initializing new in-memory state store"
Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.124009 4711 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.146832 4711 manager.go:334] "Starting Device Plugin manager"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.146884 4711 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.146897 4711 server.go:79] "Starting device plugin registration server"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.147316 4711 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.147335 4711 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.147434 4711 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.147513 4711 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.147525 4711 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.155612 4711 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.177452 4711 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.177553 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.178619 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.178665 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.178677 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.178863 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.179148 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.179217 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.179941 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.179984 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.179995 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.180401 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.180427 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.181099 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.181142 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.183411 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.183498 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.184962 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.184998 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.185024 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.185056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.185097 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.185110 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.185293 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.185463 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.185523 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186193 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186225 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186239 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186356 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186386 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186404 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186415 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186504 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.186551 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187052 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187091 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187107 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187246 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187270 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187274 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187348 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187374 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187850 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187878 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.187889 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.226077 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="400ms"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.247899 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.247919 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248106 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248141 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248176 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248215 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248264 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248281 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248364 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248426 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248463 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248522 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248543 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248584 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248606 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.248701 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.249535 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.249567 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.249576 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.249600 4711 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.249921 4711 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349515 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349589 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349634 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349660 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349681 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349703 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349732 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349754 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349790 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349808 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349816 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349827 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349847 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349863 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349883 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349899 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349916 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.349969 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350028 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350068 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350100 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350082 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350132 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350149 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350164 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350196 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350205 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350226 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350241 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.350259 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.450050 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.451345 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.451397 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.451413 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.451445 4711 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.452034 4711 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.510205 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.525565 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.539200 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c715a3d8125c17de491f86099d5739071a9f4e8d0851fa2c10a90af79e1b9b98 WatchSource:0}: Error finding container c715a3d8125c17de491f86099d5739071a9f4e8d0851fa2c10a90af79e1b9b98: Status 404 returned error can't find the container with id c715a3d8125c17de491f86099d5739071a9f4e8d0851fa2c10a90af79e1b9b98 Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.542705 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-5a268b32ce314970e794d5e9febbf0ca809f4d7c38de0c0a76a65bd6d674fad9 WatchSource:0}: Error finding container 5a268b32ce314970e794d5e9febbf0ca809f4d7c38de0c0a76a65bd6d674fad9: Status 404 returned error can't find the container with id 5a268b32ce314970e794d5e9febbf0ca809f4d7c38de0c0a76a65bd6d674fad9 Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.549311 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.562575 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ad53f96e482343cf2e760f43ec5dabea60ef8bda4275411fa253a7d002cc9ba6 WatchSource:0}: Error finding container ad53f96e482343cf2e760f43ec5dabea60ef8bda4275411fa253a7d002cc9ba6: Status 404 returned error can't find the container with id ad53f96e482343cf2e760f43ec5dabea60ef8bda4275411fa253a7d002cc9ba6 Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.563445 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.569716 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.575122 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4ff72cfb3b1684a2f5d34b6f69eda341b321296054f7cb8647edbc2c708aef6c WatchSource:0}: Error finding container 4ff72cfb3b1684a2f5d34b6f69eda341b321296054f7cb8647edbc2c708aef6c: Status 404 returned error can't find the container with id 4ff72cfb3b1684a2f5d34b6f69eda341b321296054f7cb8647edbc2c708aef6c Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.606344 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-031455cf82810d528f7c212ed48471445b8687090580fca3fdc506dc58ea2c31 WatchSource:0}: Error finding container 031455cf82810d528f7c212ed48471445b8687090580fca3fdc506dc58ea2c31: Status 404 returned error can't find the container with id 031455cf82810d528f7c212ed48471445b8687090580fca3fdc506dc58ea2c31 Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.627378 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="800ms" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.854102 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.856831 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.856871 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.856886 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:31 crc kubenswrapper[4711]: I1202 10:13:31.856912 4711 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.857339 4711 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc" Dec 02 10:13:31 crc kubenswrapper[4711]: W1202 10:13:31.923075 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:31 crc kubenswrapper[4711]: E1202 10:13:31.923199 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.022799 4711 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.085631 4711 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616" exitCode=0 Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.085675 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.085865 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"031455cf82810d528f7c212ed48471445b8687090580fca3fdc506dc58ea2c31"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.086068 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.087733 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.087765 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.087774 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.088614 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.088653 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4ff72cfb3b1684a2f5d34b6f69eda341b321296054f7cb8647edbc2c708aef6c"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.090206 4711 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab" exitCode=0 Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.090261 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.090298 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad53f96e482343cf2e760f43ec5dabea60ef8bda4275411fa253a7d002cc9ba6"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.090413 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.091197 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.091223 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.091233 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.092626 4711 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53842f72a9e6ad0de022a528398fc7e9662384e5921b7ce01efc257084710a27" exitCode=0 Dec 02 10:13:32 crc 
kubenswrapper[4711]: I1202 10:13:32.092681 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53842f72a9e6ad0de022a528398fc7e9662384e5921b7ce01efc257084710a27"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.092712 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5a268b32ce314970e794d5e9febbf0ca809f4d7c38de0c0a76a65bd6d674fad9"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.092808 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.093743 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.093773 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.093783 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.094832 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.094865 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.094846 4711 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400" exitCode=0 Dec 
02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.094931 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.094992 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c715a3d8125c17de491f86099d5739071a9f4e8d0851fa2c10a90af79e1b9b98"} Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.096053 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.096076 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.096085 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.096308 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.096368 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.096381 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:32 crc kubenswrapper[4711]: W1202 10:13:32.114448 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:32 crc kubenswrapper[4711]: E1202 10:13:32.114543 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:13:32 crc kubenswrapper[4711]: E1202 10:13:32.283524 4711 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d5e63a27549fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 10:13:31.019471355 +0000 UTC m=+0.728837822,LastTimestamp:2025-12-02 10:13:31.019471355 +0000 UTC m=+0.728837822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 10:13:32 crc kubenswrapper[4711]: E1202 10:13:32.428899 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="1.6s" Dec 02 10:13:32 crc kubenswrapper[4711]: W1202 10:13:32.461130 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:32 crc kubenswrapper[4711]: E1202 10:13:32.461357 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.657678 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.658787 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.658828 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.658839 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:32 crc kubenswrapper[4711]: I1202 10:13:32.658865 4711 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:13:32 crc kubenswrapper[4711]: E1202 10:13:32.659350 4711 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc" Dec 02 10:13:32 crc kubenswrapper[4711]: W1202 10:13:32.666135 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:32 crc kubenswrapper[4711]: E1202 10:13:32.666229 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" 
logger="UnhandledError" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.022543 4711 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.108546 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.108608 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.108622 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.108724 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.109527 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.109560 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.109570 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 
10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.111170 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.111232 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.111250 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.111202 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.112100 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.112133 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.112146 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.114313 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e"} Dec 02 
10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.114347 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.114360 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.117178 4711 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f64950403a18cc32d780be0336b27d3a3c27d1b4fd80abb94bd9cbf181ecfc56" exitCode=0 Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.117246 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f64950403a18cc32d780be0336b27d3a3c27d1b4fd80abb94bd9cbf181ecfc56"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.117379 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.118231 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.118264 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.118276 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.118724 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"283095cee32f35a91fa0b5cf45589e99f36719c211a1d5890567377b23f2b33b"} Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.118770 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.119497 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.119527 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:33 crc kubenswrapper[4711]: I1202 10:13:33.119536 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:33 crc kubenswrapper[4711]: W1202 10:13:33.629093 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:33 crc kubenswrapper[4711]: E1202 10:13:33.629195 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.021252 4711 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Dec 02 10:13:34 crc kubenswrapper[4711]: E1202 10:13:34.030141 4711 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="3.2s" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.123749 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe"} Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.123791 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719"} Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.123844 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.124660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.124689 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.124701 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.159431 4711 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5b31452bd3870e1a14d4f9224bc3c80b225efea8e5f9d362c79eb5d24a275860" exitCode=0 Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.159534 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5b31452bd3870e1a14d4f9224bc3c80b225efea8e5f9d362c79eb5d24a275860"} Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.159740 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.160178 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.159673 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.161056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.161083 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.161093 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.161160 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.161171 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.161183 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.162451 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.162477 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.162488 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.259840 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.260900 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.260937 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.260968 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:34 crc kubenswrapper[4711]: I1202 10:13:34.260995 4711 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:13:34 crc kubenswrapper[4711]: E1202 10:13:34.261410 4711 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc" Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.164026 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.164082 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.164660 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a7131b78d6a9faa52aabc4226ec07a02eb614d8ccc3e01f5642253310e8cf017"} Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.164702 4711 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b4f5762c74d517129793a7d1acf457aadf331bb3d7491e15f4e5c1442522d947"} Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.164717 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d86f8309e684edee2d9bcccea8a243aec2fd22a960fa88b380b4cd81c5397b3e"} Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.164728 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"afc1a270a474908b3dd4e13fb4c5f885ea43ab9eaef7763b9398b9624a56adcc"} Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.165010 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.165037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.165050 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:35 crc kubenswrapper[4711]: I1202 10:13:35.996199 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.169421 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.172520 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.173274 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:36 crc 
kubenswrapper[4711]: I1202 10:13:36.173300 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.172534 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6c5d72699a62ad1f54495a7388799d20e0d38a042c5772d381c64f0cc450da80"} Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.174485 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.174528 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.174539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.174531 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.174578 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:36 crc kubenswrapper[4711]: I1202 10:13:36.174600 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.175496 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.175573 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.175497 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 
10:13:37.177273 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.177309 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.177320 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.177346 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.177366 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.177377 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.221258 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.221432 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.222911 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.222944 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.222976 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.226729 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.439715 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.439907 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.441107 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.441170 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.441192 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.461901 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.463401 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.463464 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.463482 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:37 crc kubenswrapper[4711]: I1202 10:13:37.463513 4711 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.178117 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 
10:13:38.178196 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.179058 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.179105 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.179118 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.333014 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.333237 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.334633 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.334680 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.334690 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:38 crc kubenswrapper[4711]: I1202 10:13:38.388311 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.180406 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.181477 4711 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.181523 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.181534 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.362903 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.363286 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.365324 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.365450 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:39 crc kubenswrapper[4711]: I1202 10:13:39.365465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:40 crc kubenswrapper[4711]: I1202 10:13:40.182491 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:40 crc kubenswrapper[4711]: I1202 10:13:40.184034 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:40 crc kubenswrapper[4711]: I1202 10:13:40.184097 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:40 crc kubenswrapper[4711]: I1202 10:13:40.184122 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:41 crc kubenswrapper[4711]: E1202 
10:13:41.155768 4711 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 10:13:41 crc kubenswrapper[4711]: I1202 10:13:41.657792 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:41 crc kubenswrapper[4711]: I1202 10:13:41.658153 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:41 crc kubenswrapper[4711]: I1202 10:13:41.660333 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:41 crc kubenswrapper[4711]: I1202 10:13:41.660396 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:41 crc kubenswrapper[4711]: I1202 10:13:41.660410 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:42 crc kubenswrapper[4711]: I1202 10:13:42.795690 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:42 crc kubenswrapper[4711]: I1202 10:13:42.796191 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:42 crc kubenswrapper[4711]: I1202 10:13:42.824703 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:42 crc kubenswrapper[4711]: I1202 10:13:42.824771 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:42 crc kubenswrapper[4711]: I1202 10:13:42.824784 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:44 crc kubenswrapper[4711]: W1202 10:13:44.684484 4711 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 02 10:13:44 crc kubenswrapper[4711]: I1202 10:13:44.684839 4711 trace.go:236] Trace[1949715633]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 10:13:34.682) (total time: 10002ms): Dec 02 10:13:44 crc kubenswrapper[4711]: Trace[1949715633]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:13:44.684) Dec 02 10:13:44 crc kubenswrapper[4711]: Trace[1949715633]: [10.002045777s] [10.002045777s] END Dec 02 10:13:44 crc kubenswrapper[4711]: E1202 10:13:44.684917 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 02 10:13:44 crc kubenswrapper[4711]: W1202 10:13:44.876522 4711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 02 10:13:44 crc kubenswrapper[4711]: I1202 10:13:44.876621 4711 trace.go:236] Trace[364158281]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 10:13:34.874) (total time: 10001ms): Dec 02 10:13:44 crc kubenswrapper[4711]: Trace[364158281]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:13:44.876) Dec 02 10:13:44 crc kubenswrapper[4711]: Trace[364158281]: [10.001945228s] 
[10.001945228s] END Dec 02 10:13:44 crc kubenswrapper[4711]: E1202 10:13:44.876649 4711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.023294 4711 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.055504 4711 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.055684 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.063428 4711 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.063512 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.202193 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.204259 4711 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe" exitCode=255 Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.204328 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe"} Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.204546 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.205458 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.205490 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.205500 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.206081 4711 scope.go:117] "RemoveContainer" 
containerID="e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.796656 4711 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.796764 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.957661 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.958026 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.959548 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.959574 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.959584 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:45 crc kubenswrapper[4711]: I1202 10:13:45.975438 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 02 10:13:46 crc 
kubenswrapper[4711]: I1202 10:13:46.002097 4711 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]log ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]etcd ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/generic-apiserver-start-informers ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/priority-and-fairness-filter ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-apiextensions-informers ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-apiextensions-controllers ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/crd-informer-synced ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-system-namespaces-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: 
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 02 10:13:46 crc kubenswrapper[4711]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 02 10:13:46 crc kubenswrapper[4711]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/bootstrap-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/start-kube-aggregator-informers ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/apiservice-registration-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/apiservice-discovery-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]autoregister-completion ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/apiservice-openapi-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 02 10:13:46 crc kubenswrapper[4711]: livez check failed Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.002236 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.208080 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.209736 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf"} Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.209895 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.209912 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.210734 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.210788 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.210818 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.210866 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.210881 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.210889 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 02 10:13:46 crc kubenswrapper[4711]: I1202 10:13:46.222985 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 02 10:13:47 crc kubenswrapper[4711]: I1202 10:13:47.214118 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:47 crc kubenswrapper[4711]: I1202 10:13:47.215398 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:47 crc kubenswrapper[4711]: I1202 10:13:47.215456 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:47 crc kubenswrapper[4711]: I1202 10:13:47.215468 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:48 crc kubenswrapper[4711]: I1202 10:13:48.873997 4711 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 10:13:48 crc kubenswrapper[4711]: I1202 10:13:48.874656 4711 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 10:13:49 crc kubenswrapper[4711]: I1202 10:13:49.363324 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:49 crc kubenswrapper[4711]: I1202 10:13:49.365445 4711 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:13:49 crc kubenswrapper[4711]: I1202 10:13:49.380370 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:49 crc kubenswrapper[4711]: I1202 10:13:49.380501 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:49 crc kubenswrapper[4711]: I1202 10:13:49.380529 4711 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: E1202 10:13:50.056921 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.096787 4711 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.096832 4711 trace.go:236] Trace[421779659]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 10:13:39.601) (total time: 10495ms): Dec 02 10:13:50 crc kubenswrapper[4711]: Trace[421779659]: ---"Objects listed" error: 10495ms (10:13:50.096) Dec 02 10:13:50 crc kubenswrapper[4711]: Trace[421779659]: [10.495265739s] [10.495265739s] END Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.096883 4711 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.098328 4711 trace.go:236] Trace[85026186]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 10:13:35.459) (total time: 14638ms): Dec 02 10:13:50 crc kubenswrapper[4711]: Trace[85026186]: ---"Objects listed" error: 14638ms (10:13:50.098) Dec 02 10:13:50 crc kubenswrapper[4711]: Trace[85026186]: [14.638359178s] [14.638359178s] END Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.098358 4711 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.106094 4711 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.106224 4711 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 02 
10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.107501 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.107544 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.107571 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.107590 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.107609 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: E1202 10:13:50.220360 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7
-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.226886 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.226926 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.226935 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.226971 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.226984 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: E1202 10:13:50.251678 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7
-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.299338 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.299378 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.299390 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.299409 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.299420 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: E1202 10:13:50.408542 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7
-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.416712 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.416756 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.416769 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.416798 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.416811 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: E1202 10:13:50.450474 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7
-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.453981 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.454007 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.454017 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.454034 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.454045 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: E1202 10:13:50.470964 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7
-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:50 crc kubenswrapper[4711]: E1202 10:13:50.471571 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.474116 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.474154 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.474164 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.474183 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.474193 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.576161 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.576220 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.576233 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.576255 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.576267 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.678724 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.678761 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.678770 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.678788 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.678798 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.781339 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.781369 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.781376 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.781392 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.781400 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.883887 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.883925 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.883934 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.883966 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.883976 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.987338 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.987378 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.987393 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.987412 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:50 crc kubenswrapper[4711]: I1202 10:13:50.987425 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:50Z","lastTransitionTime":"2025-12-02T10:13:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.002078 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.005985 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.031333 4711 apiserver.go:52] "Watching apiserver" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.041301 4711 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.041756 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-hcx25","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.042344 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.042421 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.042507 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.042627 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.042695 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.042828 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.043052 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.043113 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hcx25" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.043215 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.043385 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.044022 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.044743 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.045384 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.045596 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.045762 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.045906 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.046085 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.046192 4711 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.046303 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.046693 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.046836 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.047892 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.068902 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.089538 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.089608 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.089649 4711 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.089663 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.089672 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.108451 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.124210 4711 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127206 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127270 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127301 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127326 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127349 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127372 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127393 4711 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127415 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127436 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127456 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127490 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127513 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127532 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127551 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127572 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127592 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127611 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127633 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127655 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127675 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127720 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127742 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127763 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127782 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127787 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127802 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127831 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127851 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127878 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127899 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127920 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127940 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127976 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.127995 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128038 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128045 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128059 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128080 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128082 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128107 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128153 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128180 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128206 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128232 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128273 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128320 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128349 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128360 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128383 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128406 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128434 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128456 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128476 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128497 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128554 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128605 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128626 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128647 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128668 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128681 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128690 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128717 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128741 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128765 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128787 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128812 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128810 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128837 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128858 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128881 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128908 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128929 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128966 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128992 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.128997 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129013 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129032 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129034 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129091 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129114 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129132 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129152 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129170 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129186 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129205 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129240 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129260 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129278 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129299 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129304 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129318 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129336 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129354 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129370 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129388 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129404 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129427 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129444 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129466 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129491 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129495 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129511 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129542 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129568 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129586 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129601 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129625 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129641 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129661 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129676 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129692 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129709 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129726 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129742 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129757 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129772 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName:
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129791 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129805 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129820 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129836 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129851 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 10:13:51 crc kubenswrapper[4711]: 
I1202 10:13:51.129867 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129892 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129910 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129926 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129942 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129974 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129990 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130008 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130025 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130041 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130057 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 10:13:51 crc 
kubenswrapper[4711]: I1202 10:13:51.130073 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130089 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130107 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130125 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130141 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130158 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130176 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130191 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130244 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130262 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130279 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 
10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130302 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130320 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130339 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130360 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130379 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130396 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130414 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130430 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130566 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130595 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130611 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130634 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130658 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130708 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130735 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130754 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130773 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130792 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130808 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130826 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130842 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130858 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130875 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130899 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130916 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130938 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130970 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130987 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131002 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131020 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131038 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131055 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131072 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 
10:13:51.131090 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131107 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131126 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131144 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131159 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131174 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131201 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131226 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131249 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131276 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131307 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131324 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131341 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131358 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131373 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131390 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131413 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131430 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131449 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131465 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131483 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131501 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131518 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131535 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131551 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131570 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131586 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131602 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131618 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131689 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131714 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131734 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131750 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131769 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131791 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131840 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97mvs\" (UniqueName: \"kubernetes.io/projected/7d542278-a5d9-41cd-b125-774fc4cbdb1f-kube-api-access-97mvs\") pod \"node-resolver-hcx25\" (UID: \"7d542278-a5d9-41cd-b125-774fc4cbdb1f\") " pod="openshift-dns/node-resolver-hcx25"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131860 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131890 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131909 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131929 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132622 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132715 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132743 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName:
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132762 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132782 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7d542278-a5d9-41cd-b125-774fc4cbdb1f-hosts-file\") pod \"node-resolver-hcx25\" (UID: \"7d542278-a5d9-41cd-b125-774fc4cbdb1f\") " pod="openshift-dns/node-resolver-hcx25"
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132840 4711 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132851 4711 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132861 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132872 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132885 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132895 4711 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132905 4711 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132914 4711 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132924 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132935 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129674 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.129897 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130047 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130355 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130363 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130496 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130531 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130655 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130681 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.130851 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131063 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131249 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131255 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.155584 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131313 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131473 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131648 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131714 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131813 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131859 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131890 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.131929 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132036 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132338 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132368 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132814 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132807 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132833 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.132914 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133074 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133075 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133172 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133218 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133313 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133379 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133445 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133447 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133474 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133641 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133660 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133690 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.133838 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.137101 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.137540 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.138154 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.138246 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.138445 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.138625 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.138866 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.138925 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.139016 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.139022 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.139196 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.139281 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.139342 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.139366 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.139561 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.141215 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.141488 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.141734 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.141930 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.142118 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.142257 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.142560 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.142686 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.142750 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.143050 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.143303 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.143511 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.143672 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.143755 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.143827 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.143926 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.144216 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.144343 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.147053 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.147627 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.151312 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.151662 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.151760 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.151831 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.151846 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.151914 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.151914 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.152070 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.152080 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.152323 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.152597 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.152739 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.152781 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.152900 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.152929 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.153002 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.153561 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.153653 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.154349 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.154717 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.155011 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.155400 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.155782 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.172073 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.172440 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.172916 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.173184 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.173366 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.173556 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.173725 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.173899 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.174120 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.174316 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.174566 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.175173 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.182511 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.182615 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.182781 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.183010 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.183229 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.183409 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.183626 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.183671 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.183680 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.183738 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.183932 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.184179 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.184388 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.188506 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.192656 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.188684 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.188802 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.189111 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.189216 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.189451 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.189486 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.189669 4711 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.189670 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.189764 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.190002 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.190136 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.190260 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.190086 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.192209 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.192385 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.192733 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.190466 4711 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.192850 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.192987 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.193133 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.193310 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.193323 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.193506 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.193590 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.193802 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.189075 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.193919 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.194251 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.194246 4711 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.194322 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.190995 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.194417 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:51.694335701 +0000 UTC m=+21.403702228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.194566 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.194754 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.194977 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.195174 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.195361 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.195491 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.195686 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.196272 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.196730 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.196816 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:51.696788858 +0000 UTC m=+21.406155305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.197058 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.197254 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.197447 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.197626 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.197900 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.198228 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:13:51.698214727 +0000 UTC m=+21.407581164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.198404 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.198560 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.198709 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.198859 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.199026 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.199339 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.199440 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.199516 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.202480 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.204167 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.204190 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.204198 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.204211 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.204219 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.207798 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.208012 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.208013 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.211055 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.214353 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.229793 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.230574 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233460 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97mvs\" (UniqueName: \"kubernetes.io/projected/7d542278-a5d9-41cd-b125-774fc4cbdb1f-kube-api-access-97mvs\") pod \"node-resolver-hcx25\" (UID: \"7d542278-a5d9-41cd-b125-774fc4cbdb1f\") " pod="openshift-dns/node-resolver-hcx25" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233504 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233537 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233574 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7d542278-a5d9-41cd-b125-774fc4cbdb1f-hosts-file\") pod \"node-resolver-hcx25\" (UID: \"7d542278-a5d9-41cd-b125-774fc4cbdb1f\") " pod="openshift-dns/node-resolver-hcx25" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233640 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233653 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233663 4711 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233672 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc 
kubenswrapper[4711]: I1202 10:13:51.233681 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233692 4711 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233701 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233709 4711 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233722 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233734 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233745 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233758 4711 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233773 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233784 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233793 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233801 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233842 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233851 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233859 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc 
kubenswrapper[4711]: I1202 10:13:51.233870 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233880 4711 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233908 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233921 4711 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.233962 4711 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.236283 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.236582 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.236698 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7d542278-a5d9-41cd-b125-774fc4cbdb1f-hosts-file\") pod \"node-resolver-hcx25\" (UID: \"7d542278-a5d9-41cd-b125-774fc4cbdb1f\") " pod="openshift-dns/node-resolver-hcx25" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.239931 4711 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240037 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240048 4711 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240060 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240080 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240090 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240100 4711 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240114 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240128 4711 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240137 4711 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240146 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240155 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240167 4711 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240176 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240184 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240195 4711 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240205 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240214 4711 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240222 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240233 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 
10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240245 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240254 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240272 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240284 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240293 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240302 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240314 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc 
kubenswrapper[4711]: I1202 10:13:51.240323 4711 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240332 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240341 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240354 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240363 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240372 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240381 4711 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240392 4711 reconciler_common.go:293] "Volume detached for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240400 4711 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240409 4711 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240417 4711 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240427 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240436 4711 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240445 4711 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240455 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on 
node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240465 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240473 4711 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240482 4711 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240493 4711 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240538 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240548 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240555 4711 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240567 
4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240576 4711 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240584 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240595 4711 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240603 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240612 4711 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240621 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240633 4711 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240642 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240650 4711 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240659 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240670 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240680 4711 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240688 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240698 4711 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240709 
4711 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240717 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240726 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240737 4711 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240745 4711 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240754 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240762 4711 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240773 4711 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240783 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240791 4711 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240800 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240810 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240818 4711 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240828 4711 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240836 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240846 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240855 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240864 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240876 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240885 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240893 4711 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240902 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node 
\"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240912 4711 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240921 4711 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240930 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240938 4711 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240963 4711 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240972 4711 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240982 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.240994 4711 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241004 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241013 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241022 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241032 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241041 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241050 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241058 4711 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241069 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241078 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241086 4711 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241096 4711 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241112 4711 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241120 4711 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241129 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on 
node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241139 4711 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241148 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241157 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241166 4711 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241177 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241186 4711 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241196 4711 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" 
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241205 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241216 4711 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241225 4711 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241234 4711 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241243 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241253 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241262 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241271 4711 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241281 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241289 4711 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241298 4711 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241306 4711 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241318 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241327 4711 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241337 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241348 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241363 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241373 4711 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241384 4711 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241398 4711 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241409 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241419 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on 
node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241431 4711 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241445 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241456 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241467 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241478 4711 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241490 4711 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241499 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc 
kubenswrapper[4711]: I1202 10:13:51.241507 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241516 4711 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241527 4711 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241536 4711 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241544 4711 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241555 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241563 4711 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241571 4711 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241579 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241590 4711 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241598 4711 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.241606 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.248307 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.265178 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.265209 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.265222 4711 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.265287 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:51.765265541 +0000 UTC m=+21.474632008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.272490 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.272610 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97mvs\" (UniqueName: \"kubernetes.io/projected/7d542278-a5d9-41cd-b125-774fc4cbdb1f-kube-api-access-97mvs\") pod \"node-resolver-hcx25\" (UID: \"7d542278-a5d9-41cd-b125-774fc4cbdb1f\") " pod="openshift-dns/node-resolver-hcx25" 
Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.277435 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.280772 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.280816 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.280834 4711 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.280900 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:51.780876476 +0000 UTC m=+21.490242923 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.281490 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.282687 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.292918 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.300659 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.302611 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.306569 4711 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.309591 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, 
/tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.320509 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.321806 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.321834 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.321843 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.321859 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.321867 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.326925 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.342539 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.342748 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.342853 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.342975 4711 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.342559 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.355772 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.355788 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.362695 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.368188 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: W1202 10:13:51.369318 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-fe37c56af035883a91c80b97ae87fd0503407c2e07e3d8c7c562ca825fc13195 WatchSource:0}: Error finding container fe37c56af035883a91c80b97ae87fd0503407c2e07e3d8c7c562ca825fc13195: Status 404 returned error can't find the container with id 
fe37c56af035883a91c80b97ae87fd0503407c2e07e3d8c7c562ca825fc13195 Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.379678 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.386853 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.389451 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hcx25" Dec 02 10:13:51 crc kubenswrapper[4711]: W1202 10:13:51.399623 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ed5c832e38af9369a496965c8580be25b8bb432718d082b8dfde7dea2be00ca3 WatchSource:0}: Error finding container ed5c832e38af9369a496965c8580be25b8bb432718d082b8dfde7dea2be00ca3: Status 404 returned error can't find the container with id ed5c832e38af9369a496965c8580be25b8bb432718d082b8dfde7dea2be00ca3 Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.406247 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.416362 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.425175 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.425225 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.425237 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.425255 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.425267 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.427816 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.455233 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.470297 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.485507 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, 
/tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.493149 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.529534 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.529571 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.529581 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.529599 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 
10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.529609 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.631833 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.631895 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.631907 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.631922 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.631986 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.736344 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.736387 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.736397 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.736464 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.736478 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.759505 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.759579 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.759629 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.759700 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:13:52.759670124 +0000 UTC m=+22.469036571 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.759704 4711 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.759773 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:52.759767176 +0000 UTC m=+22.469133623 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.759810 4711 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.759915 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 10:13:52.759890979 +0000 UTC m=+22.469257496 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.838656 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.838689 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.838699 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.838720 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.838731 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.860469 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.860516 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.860641 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.860657 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.860668 4711 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.860668 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:51 crc 
kubenswrapper[4711]: E1202 10:13:51.860691 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.860702 4711 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.860710 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:52.860697072 +0000 UTC m=+22.570063519 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:51 crc kubenswrapper[4711]: E1202 10:13:51.860738 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:52.860725813 +0000 UTC m=+22.570092260 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.952064 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.952095 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.952103 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.952117 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:51 crc kubenswrapper[4711]: I1202 10:13:51.952126 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:51Z","lastTransitionTime":"2025-12-02T10:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.026850 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5xjmc"] Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.027470 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9b9cn"] Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.027721 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.028057 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.033170 4711 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.033240 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.033285 4711 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource 
"configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.033297 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.033332 4711 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.033344 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.033469 4711 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc 
kubenswrapper[4711]: E1202 10:13:52.033486 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.033527 4711 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.033539 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.033565 4711 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.033575 4711 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.033707 4711 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.033723 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.034216 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4qrj7"] Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.034567 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.034795 4711 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.034823 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.035537 4711 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.035665 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.035781 4711 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User 
"system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.035806 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.036205 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.036525 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.047719 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062168 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-socket-dir-parent\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062214 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-cni-multus\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062242 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cnibin\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062309 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062354 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-cni-binary-copy\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062389 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-k8s-cni-cncf-io\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062404 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-netns\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062421 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-multus-certs\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062454 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjkh4\" (UniqueName: \"kubernetes.io/projected/2fab88a2-3875-44a4-a926-7c76836b51b8-kube-api-access-cjkh4\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062476 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0641e884-c845-499c-9ce6-0c4f1a893b5a-rootfs\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062522 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-cni-bin\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062541 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-hostroot\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062563 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0641e884-c845-499c-9ce6-0c4f1a893b5a-proxy-tls\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062579 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-system-cni-dir\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062600 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-system-cni-dir\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062616 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-os-release\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062636 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-daemon-config\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 
10:13:52.062652 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-etc-kubernetes\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062670 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwpz2\" (UniqueName: \"kubernetes.io/projected/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-kube-api-access-fwpz2\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062708 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062725 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-cnibin\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062743 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-cni-dir\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc 
kubenswrapper[4711]: I1202 10:13:52.062762 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-conf-dir\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062779 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062804 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0641e884-c845-499c-9ce6-0c4f1a893b5a-mcd-auth-proxy-config\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062824 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xqs\" (UniqueName: \"kubernetes.io/projected/0641e884-c845-499c-9ce6-0c4f1a893b5a-kube-api-access-b8xqs\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062849 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-os-release\") pod \"multus-4qrj7\" (UID: 
\"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.062942 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-kubelet\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.063528 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.063553 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.063562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.063577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.063585 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.072340 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.084250 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.093001 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.111877 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.127619 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.139997 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.151338 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.160422 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163578 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-socket-dir-parent\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 
10:13:52.163625 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-cni-multus\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163652 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cnibin\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163680 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-k8s-cni-cncf-io\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163698 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-netns\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163721 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-multus-certs\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163741 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163765 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-cni-binary-copy\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163786 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjkh4\" (UniqueName: \"kubernetes.io/projected/2fab88a2-3875-44a4-a926-7c76836b51b8-kube-api-access-cjkh4\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163805 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0641e884-c845-499c-9ce6-0c4f1a893b5a-rootfs\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163828 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-cni-bin\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163848 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-hostroot\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163868 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0641e884-c845-499c-9ce6-0c4f1a893b5a-proxy-tls\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163885 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-system-cni-dir\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163907 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-system-cni-dir\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163928 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-os-release\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163965 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-daemon-config\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.163986 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-etc-kubernetes\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164006 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwpz2\" (UniqueName: \"kubernetes.io/projected/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-kube-api-access-fwpz2\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164047 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-cnibin\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164070 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164089 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-cni-dir\") 
pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164108 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-conf-dir\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164146 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0641e884-c845-499c-9ce6-0c4f1a893b5a-mcd-auth-proxy-config\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164174 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xqs\" (UniqueName: \"kubernetes.io/projected/0641e884-c845-499c-9ce6-0c4f1a893b5a-kube-api-access-b8xqs\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164195 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164218 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-os-release\") pod 
\"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164241 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-kubelet\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164309 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-kubelet\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164401 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-socket-dir-parent\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164433 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-cni-multus\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164463 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cnibin\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 
crc kubenswrapper[4711]: I1202 10:13:52.164496 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-k8s-cni-cncf-io\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164528 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-netns\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164555 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-run-multus-certs\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164936 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0641e884-c845-499c-9ce6-0c4f1a893b5a-rootfs\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.164992 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-host-var-lib-cni-bin\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165025 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-hostroot\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165118 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-system-cni-dir\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165154 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-system-cni-dir\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165202 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-os-release\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165471 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-cnibin\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165511 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-etc-kubernetes\") pod \"multus-4qrj7\" (UID: 
\"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165563 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-cni-dir\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165653 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-conf-dir\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.165657 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2fab88a2-3875-44a4-a926-7c76836b51b8-os-release\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.167255 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.167289 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.167299 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.167313 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.167322 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.172849 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.182678 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.191777 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.198289 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.210134 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-multus-daemon-config\") pod 
\"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.210762 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.220890 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.230628 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.237835 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.248031 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.258910 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.270260 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.270476 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.270493 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.270502 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.270516 4711 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.270528 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.280489 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.290015 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.304192 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.304247 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.304261 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5c95a7de6e071b527fcc72892d8f3688e00f76be943ad8a2ec26ff943e0ed4e6"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.305160 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.305201 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fe37c56af035883a91c80b97ae87fd0503407c2e07e3d8c7c562ca825fc13195"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.306022 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ed5c832e38af9369a496965c8580be25b8bb432718d082b8dfde7dea2be00ca3"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.307533 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hcx25" event={"ID":"7d542278-a5d9-41cd-b125-774fc4cbdb1f","Type":"ContainerStarted","Data":"979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.307569 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hcx25" 
event={"ID":"7d542278-a5d9-41cd-b125-774fc4cbdb1f","Type":"ContainerStarted","Data":"25fc5df249d0aa1f35c540858e737fd8f17d9cfdef3c34978fef8fb8fe7318b9"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.315627 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.327794 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.339371 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.347915 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.354251 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.365417 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.372213 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.372255 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.372264 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.372280 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.372289 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.377286 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.389759 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.401665 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.412792 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.414360 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n6sdh"] Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.415206 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.416694 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.416968 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.417264 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.417276 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.417286 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.418104 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.418345 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.427202 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.439762 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.451187 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467148 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-systemd\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467209 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68skn\" (UniqueName: \"kubernetes.io/projected/064b98c4-b388-4c62-bcbc-11037274acdb-kube-api-access-68skn\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467250 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-systemd-units\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467271 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-slash\") pod \"ovnkube-node-n6sdh\" (UID: 
\"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467293 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-etc-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467312 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-node-log\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467386 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-kubelet\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467407 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-env-overrides\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467459 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-netd\") pod \"ovnkube-node-n6sdh\" (UID: 
\"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467481 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467500 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-ovn\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467485 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467590 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 
10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467608 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-log-socket\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467629 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-bin\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467645 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-netns\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467660 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-var-lib-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467674 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-config\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467709 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-script-lib\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467734 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.467765 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/064b98c4-b388-4c62-bcbc-11037274acdb-ovn-node-metrics-cert\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.474422 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.474475 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.474488 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.474507 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:52 crc 
kubenswrapper[4711]: I1202 10:13:52.474518 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.479829 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.492975 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.503700 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.514541 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.527661 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.547678 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.561011 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569286 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-script-lib\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569337 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569357 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/064b98c4-b388-4c62-bcbc-11037274acdb-ovn-node-metrics-cert\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569375 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-systemd\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569395 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68skn\" (UniqueName: \"kubernetes.io/projected/064b98c4-b388-4c62-bcbc-11037274acdb-kube-api-access-68skn\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569417 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-systemd-units\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569435 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-slash\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569459 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-etc-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569481 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-node-log\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569470 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569520 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-kubelet\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569533 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-slash\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569564 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-etc-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569571 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-env-overrides\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569492 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-systemd\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569572 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-systemd-units\") pod \"ovnkube-node-n6sdh\" (UID: 
\"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569616 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-node-log\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569618 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-kubelet\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569725 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-netd\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569774 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-ovn\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569799 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc 
kubenswrapper[4711]: I1202 10:13:52.569804 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-ovn\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569821 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-netd\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569822 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569849 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569856 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: 
I1202 10:13:52.569916 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-netns\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569965 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-var-lib-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569992 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-netns\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.569993 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-log-socket\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.570674 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-bin\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.570706 4711 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-config\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.570354 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-env-overrides\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.570761 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-bin\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.570394 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-script-lib\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.570018 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-log-socket\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.570037 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-var-lib-openvswitch\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.571295 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-config\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.573642 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.577533 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.577656 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.577730 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.577799 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.577867 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.589716 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.594682 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/064b98c4-b388-4c62-bcbc-11037274acdb-ovn-node-metrics-cert\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.594876 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68skn\" (UniqueName: \"kubernetes.io/projected/064b98c4-b388-4c62-bcbc-11037274acdb-kube-api-access-68skn\") pod \"ovnkube-node-n6sdh\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.680539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.680577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.680586 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 
10:13:52.680600 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.680611 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.732404 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:52 crc kubenswrapper[4711]: W1202 10:13:52.753930 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod064b98c4_b388_4c62_bcbc_11037274acdb.slice/crio-a916b32f1d2ea1899786f3bad28d11d5cc126432e31994d471e82f91b4e3153b WatchSource:0}: Error finding container a916b32f1d2ea1899786f3bad28d11d5cc126432e31994d471e82f91b4e3153b: Status 404 returned error can't find the container with id a916b32f1d2ea1899786f3bad28d11d5cc126432e31994d471e82f91b4e3153b Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.772003 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.772190 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.772266 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.772405 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:13:54.772386228 +0000 UTC m=+24.481752675 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.772450 4711 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.772643 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:54.772634316 +0000 UTC m=+24.482000763 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.772480 4711 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.772765 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 10:13:54.772757199 +0000 UTC m=+24.482123646 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.782198 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.782241 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.782252 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.782268 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.782279 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.798460 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.802357 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.807906 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.810829 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.967563 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 
10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.967654 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.967816 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.967837 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.967850 4711 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.967896 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:54.967880798 +0000 UTC m=+24.677247245 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.967931 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.968011 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.968029 4711 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:52 crc kubenswrapper[4711]: E1202 10:13:52.968144 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:54.968100253 +0000 UTC m=+24.677466800 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.969064 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.969131 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.969151 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.969170 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.969184 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:52Z","lastTransitionTime":"2025-12-02T10:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.971458 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.971804 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:52 crc kubenswrapper[4711]: I1202 10:13:52.975470 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.041244 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.074002 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.083731 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.083773 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc 
kubenswrapper[4711]: I1202 10:13:53.083787 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.083809 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.083821 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.083871 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.083979 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.084039 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.084087 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.084292 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.084351 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.086437 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.088443 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.089397 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.090889 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.091791 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.094877 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.095768 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.096528 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.097575 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.098291 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.099346 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.100061 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.101538 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.102226 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.103347 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.103975 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.105003 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.105621 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.106148 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.107115 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.107305 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.107916 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.108879 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.109541 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.110109 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.111189 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.111697 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.112787 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.113592 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.114723 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.115441 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.116422 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.116936 4711 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.117127 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.118920 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.119831 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.120358 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.120500 4711 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.122081 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.123118 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.123720 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.124805 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.125521 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.126465 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.127218 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.128302 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.128977 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.129969 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.130555 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.131542 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.132523 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.133514 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.134047 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.134628 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.135585 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.135877 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.136353 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.137313 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.142822 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.149703 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0641e884-c845-499c-9ce6-0c4f1a893b5a-proxy-tls\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.156962 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.164870 4711 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.164963 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-cni-binary-copy podName:2fab88a2-3875-44a4-a926-7c76836b51b8 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:53.664930379 +0000 UTC m=+23.374296826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-cni-binary-copy") pod "multus-4qrj7" (UID: "2fab88a2-3875-44a4-a926-7c76836b51b8") : failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.165913 4711 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.165944 4711 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.165975 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-binary-copy podName:3b9aece8-a05e-47ea-ab7f-b906e93c71c6 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:53.665963348 +0000 UTC m=+23.375329795 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-binary-copy") pod "multus-additional-cni-plugins-5xjmc" (UID: "3b9aece8-a05e-47ea-ab7f-b906e93c71c6") : failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.166004 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0641e884-c845-499c-9ce6-0c4f1a893b5a-mcd-auth-proxy-config podName:0641e884-c845-499c-9ce6-0c4f1a893b5a nodeName:}" failed. No retries permitted until 2025-12-02 10:13:53.665991448 +0000 UTC m=+23.375357895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/0641e884-c845-499c-9ce6-0c4f1a893b5a-mcd-auth-proxy-config") pod "machine-config-daemon-9b9cn" (UID: "0641e884-c845-499c-9ce6-0c4f1a893b5a") : failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.169169 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.181386 4711 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.186227 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.186266 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.186278 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.186294 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.186306 4711 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.187465 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.199402 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.210447 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.225715 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.240414 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.255014 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.256256 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.267002 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.281910 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.288705 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.288930 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.288943 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.288972 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.288982 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.294526 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.303311 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.310846 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387" exitCode=0 Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.310925 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.311026 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"a916b32f1d2ea1899786f3bad28d11d5cc126432e31994d471e82f91b4e3153b"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.318548 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.322189 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.333696 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.354552 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.393612 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.393649 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.393660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.393677 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.393687 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.397098 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.425666 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.454406 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.494534 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.495648 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.495675 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.495685 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.495698 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.495709 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.533834 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.574350 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.603569 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.603627 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.603641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.603658 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.603673 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.605128 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.612115 4711 projected.go:194] Error preparing data for projected volume kube-api-access-b8xqs for pod openshift-machine-config-operator/machine-config-daemon-9b9cn: failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: E1202 10:13:53.612227 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0641e884-c845-499c-9ce6-0c4f1a893b5a-kube-api-access-b8xqs podName:0641e884-c845-499c-9ce6-0c4f1a893b5a nodeName:}" failed. No retries permitted until 2025-12-02 10:13:54.112201109 +0000 UTC m=+23.821567556 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-b8xqs" (UniqueName: "kubernetes.io/projected/0641e884-c845-499c-9ce6-0c4f1a893b5a-kube-api-access-b8xqs") pod "machine-config-daemon-9b9cn" (UID: "0641e884-c845-499c-9ce6-0c4f1a893b5a") : failed to sync configmap cache: timed out waiting for the condition Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.642069 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.646574 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.665842 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.670766 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwpz2\" (UniqueName: \"kubernetes.io/projected/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-kube-api-access-fwpz2\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.672773 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0641e884-c845-499c-9ce6-0c4f1a893b5a-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.672813 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-5xjmc\" (UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.672849 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-cni-binary-copy\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.673394 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2fab88a2-3875-44a4-a926-7c76836b51b8-cni-binary-copy\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.674234 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0641e884-c845-499c-9ce6-0c4f1a893b5a-mcd-auth-proxy-config\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.674740 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b9aece8-a05e-47ea-ab7f-b906e93c71c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-5xjmc\" 
(UID: \"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\") " pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.675712 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjkh4\" (UniqueName: \"kubernetes.io/projected/2fab88a2-3875-44a4-a926-7c76836b51b8-kube-api-access-cjkh4\") pod \"multus-4qrj7\" (UID: \"2fab88a2-3875-44a4-a926-7c76836b51b8\") " pod="openshift-multus/multus-4qrj7" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.705936 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.705989 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.706001 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.706013 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.706022 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.715431 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.753172 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.797390 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.809900 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.809963 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.809976 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.809993 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.810002 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.839227 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.848517 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.855358 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4qrj7" Dec 02 10:13:53 crc kubenswrapper[4711]: W1202 10:13:53.860810 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b9aece8_a05e_47ea_ab7f_b906e93c71c6.slice/crio-0cc87e119a2f4d1aeacc32a01f8c4efc25f0c9729ae616f1fa032be4358bd1cf WatchSource:0}: Error finding container 0cc87e119a2f4d1aeacc32a01f8c4efc25f0c9729ae616f1fa032be4358bd1cf: Status 404 returned error can't find the container with id 0cc87e119a2f4d1aeacc32a01f8c4efc25f0c9729ae616f1fa032be4358bd1cf Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.874667 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: W1202 10:13:53.879454 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fab88a2_3875_44a4_a926_7c76836b51b8.slice/crio-6c4bb63fa85a9054c8521b5d66453de263087d3a3c6bd6ccedc7e3982a0c63bd WatchSource:0}: Error finding container 6c4bb63fa85a9054c8521b5d66453de263087d3a3c6bd6ccedc7e3982a0c63bd: Status 404 returned error can't find the container 
with id 6c4bb63fa85a9054c8521b5d66453de263087d3a3c6bd6ccedc7e3982a0c63bd Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.914503 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.914549 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.914559 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.914577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.914587 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:53Z","lastTransitionTime":"2025-12-02T10:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.923023 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.956516 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:53 crc kubenswrapper[4711]: I1202 10:13:53.995159 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:53Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.016916 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.016976 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 
10:13:54.016985 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.016998 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.017009 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.038335 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.054597 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-g7srl"] Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.054978 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.075891 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.085709 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.105853 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.145914 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.145914 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.146022 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.146043 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.146060 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.146073 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.147262 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.176736 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-serviceca\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.176843 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbb5\" (UniqueName: \"kubernetes.io/projected/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-kube-api-access-5bbb5\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.176936 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xqs\" (UniqueName: \"kubernetes.io/projected/0641e884-c845-499c-9ce6-0c4f1a893b5a-kube-api-access-b8xqs\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.177517 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-host\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.180985 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8xqs\" (UniqueName: \"kubernetes.io/projected/0641e884-c845-499c-9ce6-0c4f1a893b5a-kube-api-access-b8xqs\") pod \"machine-config-daemon-9b9cn\" (UID: \"0641e884-c845-499c-9ce6-0c4f1a893b5a\") " pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.200107 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.235161 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.248063 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.248109 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.248121 
4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.248135 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.248145 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.273251 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.278711 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbb5\" (UniqueName: \"kubernetes.io/projected/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-kube-api-access-5bbb5\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.278793 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-host\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.278865 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-serviceca\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.279004 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-host\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.280564 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-serviceca\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.315242 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.318036 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59"} 
Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.318066 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.318080 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.318092 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.318102 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.318112 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.318942 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4qrj7" event={"ID":"2fab88a2-3875-44a4-a926-7c76836b51b8","Type":"ContainerStarted","Data":"04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.319010 4711 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4qrj7" event={"ID":"2fab88a2-3875-44a4-a926-7c76836b51b8","Type":"ContainerStarted","Data":"6c4bb63fa85a9054c8521b5d66453de263087d3a3c6bd6ccedc7e3982a0c63bd"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.320276 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerStarted","Data":"5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.320314 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerStarted","Data":"0cc87e119a2f4d1aeacc32a01f8c4efc25f0c9729ae616f1fa032be4358bd1cf"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.323449 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bbb5\" (UniqueName: \"kubernetes.io/projected/bfcbeeaf-d773-49ac-bae3-b457ca7847d3-kube-api-access-5bbb5\") pod \"node-ca-g7srl\" (UID: \"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\") " pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.335771 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.349793 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.349820 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.349829 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.349842 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.349857 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.368394 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g7srl" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.374117 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.412615 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.439114 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:13:54 crc kubenswrapper[4711]: W1202 10:13:54.447973 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfcbeeaf_d773_49ac_bae3_b457ca7847d3.slice/crio-d64a1717314d0e2724c387bbf5c293a7c9e052cfc82d6247c11daf41483e2424 WatchSource:0}: Error finding container d64a1717314d0e2724c387bbf5c293a7c9e052cfc82d6247c11daf41483e2424: Status 404 returned error can't find the container with id d64a1717314d0e2724c387bbf5c293a7c9e052cfc82d6247c11daf41483e2424 Dec 02 10:13:54 crc kubenswrapper[4711]: W1202 10:13:54.450780 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0641e884_c845_499c_9ce6_0c4f1a893b5a.slice/crio-d17363161b328dfd9a640b959126a701bc0db8749d7f8ab9c594525723631fdb WatchSource:0}: Error finding container d17363161b328dfd9a640b959126a701bc0db8749d7f8ab9c594525723631fdb: Status 404 returned error can't find the container with id d17363161b328dfd9a640b959126a701bc0db8749d7f8ab9c594525723631fdb Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.451355 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.451390 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.451403 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.451418 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.451429 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.456101 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.492232 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.533505 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.562250 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.562287 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.562298 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.562314 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.562325 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.586558 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.614377 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 
10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.653482 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.667001 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.667042 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.667050 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.667063 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.667071 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.765009 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.769417 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.769451 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.769461 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.769477 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.769488 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.782006 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec8109
6cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.791470 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.791636 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:13:58.791611479 +0000 UTC m=+28.500977926 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.791696 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.791777 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.791812 4711 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.791891 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:58.791882807 +0000 UTC m=+28.501249254 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.791892 4711 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.791946 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:58.791929338 +0000 UTC m=+28.501295825 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.793566 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.823322 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.877420 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.877462 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.877471 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.877487 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.877499 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.931318 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.966471 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.979881 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.979927 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.979938 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 
10:13:54.979967 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.979977 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:54Z","lastTransitionTime":"2025-12-02T10:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.984824 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.993257 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:13:54 crc kubenswrapper[4711]: I1202 10:13:54.993330 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.993479 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.993489 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.993530 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.993543 4711 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.993600 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:58.993580175 +0000 UTC m=+28.702946692 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.993500 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.993666 4711 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:54 crc kubenswrapper[4711]: E1202 10:13:54.993721 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:13:58.993697768 +0000 UTC m=+28.703064215 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.001016 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:54Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.012211 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.051819 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.080392 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:55 crc kubenswrapper[4711]: E1202 10:13:55.081464 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.081738 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:55 crc kubenswrapper[4711]: E1202 10:13:55.081941 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.082025 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:13:55 crc kubenswrapper[4711]: E1202 10:13:55.082252 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.085921 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.085972 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.085983 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.085996 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.086005 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.095126 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.134650 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.173641 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.188154 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.188200 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.188212 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.188230 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.188243 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.215572 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.252729 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.298797 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.298823 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.298832 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.298845 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.298853 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.305350 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z 
is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.326977 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g7srl" event={"ID":"bfcbeeaf-d773-49ac-bae3-b457ca7847d3","Type":"ContainerStarted","Data":"71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.327038 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g7srl" event={"ID":"bfcbeeaf-d773-49ac-bae3-b457ca7847d3","Type":"ContainerStarted","Data":"d64a1717314d0e2724c387bbf5c293a7c9e052cfc82d6247c11daf41483e2424"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.328577 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.328636 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.328651 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"d17363161b328dfd9a640b959126a701bc0db8749d7f8ab9c594525723631fdb"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.330076 4711 generic.go:334] "Generic (PLEG): container finished" podID="3b9aece8-a05e-47ea-ab7f-b906e93c71c6" containerID="5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc" exitCode=0 Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.330131 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerDied","Data":"5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.347156 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.376766 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.402729 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.402771 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.402779 4711 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.402795 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.403100 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.414458 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.461416 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.497189 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.506782 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.506820 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.506833 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.506850 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.506862 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.532481 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.572961 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.615912 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.615962 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.615983 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.616004 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.616014 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.629272 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.652407 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.694739 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T1
0:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.717791 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.717823 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.717831 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.717844 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.717853 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.736129 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.774915 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.816788 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.820101 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.820144 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.820156 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.820170 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.820181 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.856465 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.906275 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.922567 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.922600 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.922609 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.922622 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:55 crc kubenswrapper[4711]: I1202 10:13:55.922631 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:55Z","lastTransitionTime":"2025-12-02T10:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.027210 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.027245 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.027256 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.027272 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.027281 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.129198 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.129235 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.129243 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.129260 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.129271 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.238427 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.238482 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.238493 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.238510 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.238521 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.345755 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.346082 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.346092 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.346110 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.346119 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.347226 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerStarted","Data":"19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.365612 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a
66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.379088 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.393351 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.413067 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.429562 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.442823 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.448898 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.448979 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.448994 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.449010 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.449056 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.457936 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.470554 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.483221 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.498682 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.514710 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.528512 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 
10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.539944 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.553225 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.553252 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.553261 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.553276 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.553285 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.564122 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.656120 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.656195 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.656227 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.656266 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 
10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.656286 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.759077 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.759116 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.759126 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.759141 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.759151 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.862380 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.862438 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.862446 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.862480 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.862497 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.964868 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.965177 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.965186 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.965200 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:56 crc kubenswrapper[4711]: I1202 10:13:56.965213 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:56Z","lastTransitionTime":"2025-12-02T10:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.067253 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.067337 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.067349 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.067371 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.067385 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.077599 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.077676 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.077647 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:57 crc kubenswrapper[4711]: E1202 10:13:57.077881 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:13:57 crc kubenswrapper[4711]: E1202 10:13:57.078007 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:13:57 crc kubenswrapper[4711]: E1202 10:13:57.078163 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.170758 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.170819 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.170838 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.170862 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.170889 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.274202 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.274289 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.274313 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.274383 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.274408 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.354077 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.356800 4711 generic.go:334] "Generic (PLEG): container finished" podID="3b9aece8-a05e-47ea-ab7f-b906e93c71c6" containerID="19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f" exitCode=0 Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.356829 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerDied","Data":"19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.376130 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.376167 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.376175 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.376188 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.376197 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.379130 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.393783 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.408011 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.419846 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.432625 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.447012 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.459394 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.471230 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.478214 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.478260 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.478271 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.478298 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.478311 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.485643 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.498717 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.509589 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.524635 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.537006 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.553963 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:57Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.580629 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.580658 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.580668 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.580681 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.580690 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.683825 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.683868 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.683877 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.683891 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.683902 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.786184 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.786241 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.786258 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.786290 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.786307 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.889088 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.889168 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.889178 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.889201 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.889213 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.993287 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.993359 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.993389 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.993450 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:57 crc kubenswrapper[4711]: I1202 10:13:57.993470 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:57Z","lastTransitionTime":"2025-12-02T10:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.096000 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.096061 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.096071 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.096135 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.096191 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.199169 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.199331 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.199345 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.199370 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.199384 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.301392 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.301455 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.301469 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.301489 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.301502 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.362180 4711 generic.go:334] "Generic (PLEG): container finished" podID="3b9aece8-a05e-47ea-ab7f-b906e93c71c6" containerID="7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097" exitCode=0 Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.362209 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerDied","Data":"7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.376150 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.396119 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.409622 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.409690 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.409703 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.409726 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.409739 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.417668 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.433368 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.450347 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.463689 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.478937 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e
7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.508165 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.512085 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.512124 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.512133 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.512149 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.512163 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.539035 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.557614 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.580127 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.599854 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.609431 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.614114 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.614159 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.614173 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.614190 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.614201 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.621469 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec8109
6cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:58Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.717666 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.718154 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.718169 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.718189 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.718202 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.796326 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:13:58 crc kubenswrapper[4711]: E1202 10:13:58.796670 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:14:06.796544948 +0000 UTC m=+36.505911445 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.796815 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:58 crc kubenswrapper[4711]: E1202 10:13:58.797080 4711 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.797082 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:58 crc kubenswrapper[4711]: E1202 10:13:58.797200 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:06.797161754 +0000 UTC m=+36.506528201 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:13:58 crc kubenswrapper[4711]: E1202 10:13:58.797212 4711 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:58 crc kubenswrapper[4711]: E1202 10:13:58.797323 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:06.797297848 +0000 UTC m=+36.506664455 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.821084 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.821130 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.821139 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.821161 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc 
kubenswrapper[4711]: I1202 10:13:58.821171 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.924473 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.924502 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.924510 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.924523 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:58 crc kubenswrapper[4711]: I1202 10:13:58.924535 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:58Z","lastTransitionTime":"2025-12-02T10:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.000050 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.000130 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.000286 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.000308 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.000315 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.000352 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.000366 4711 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.000417 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:07.000401715 +0000 UTC m=+36.709768162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.000331 4711 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.000778 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:07.000763714 +0000 UTC m=+36.710130231 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.027391 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.027420 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.027429 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.027445 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.027456 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.077752 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.078068 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.078196 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.078496 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.078552 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:13:59 crc kubenswrapper[4711]: E1202 10:13:59.078599 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.129849 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.129881 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.129892 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.129909 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.129921 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.232048 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.232082 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.232091 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.232107 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.232117 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.334449 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.334497 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.334505 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.334521 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.334531 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.366523 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.378495 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.378747 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.380004 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.386070 4711 generic.go:334] "Generic (PLEG): container finished" podID="3b9aece8-a05e-47ea-ab7f-b906e93c71c6" containerID="993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9" exitCode=0 Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.386269 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerDied","Data":"993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9"} Dec 02 10:13:59 crc 
kubenswrapper[4711]: I1202 10:13:59.397132 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d1386
2bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.416358 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.418019 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.429577 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.436387 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.436423 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc 
kubenswrapper[4711]: I1202 10:13:59.436457 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.436474 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.436483 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.441154 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.451859 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.466243 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.489756 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.502592 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 
10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.515898 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.530310 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.538508 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.538554 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.538563 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.538579 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.538598 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.542736 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec8109
6cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.553685 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.566757 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.596464 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.608397 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.621321 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.629998 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.640905 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.640942 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.640970 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.640987 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.640999 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.642285 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.654261 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.662915 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.672448 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.687284 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.698403 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:5
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.708295 4711 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.721306 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.734104 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.743315 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:13:59Z is after 2025-08-24T17:21:41Z" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.743765 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.743794 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.743804 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.743818 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.743829 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.846515 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.846553 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.846562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.846578 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.846587 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.949733 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.949812 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.949839 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.949867 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:13:59 crc kubenswrapper[4711]: I1202 10:13:59.949885 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:13:59Z","lastTransitionTime":"2025-12-02T10:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.052257 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.052298 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.052309 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.052324 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.052335 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.185796 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.185848 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.185858 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.185875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.185885 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.289329 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.289364 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.289373 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.289387 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.289397 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.391294 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.391328 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.391336 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.391348 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.391358 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.393276 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerStarted","Data":"c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.393391 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.393917 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.494550 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.494590 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.494602 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.494621 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.494635 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.494741 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.508586 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.513242 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.527348 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"
name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067461
6e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.540267 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.550400 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.565708 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.579554 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.617904 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.619168 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.619203 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.619215 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.619232 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.619243 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.628013 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.638023 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.638057 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.638067 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.638080 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.638090 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.648769 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: E1202 10:14:00.649572 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.660483 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.660517 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.660525 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.660539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.660548 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.661899 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: E1202 10:14:00.671500 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.672683 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.674617 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.674649 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.674700 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc 
kubenswrapper[4711]: I1202 10:14:00.674718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.674729 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.684583 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: E1202 10:14:00.684663 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.688177 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.688211 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.688223 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.688239 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.688251 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: E1202 10:14:00.700863 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.715626 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.715913 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.715929 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.715971 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.715993 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.718608 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: E1202 10:14:00.727720 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: E1202 10:14:00.727964 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.730290 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.730331 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.730342 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.730359 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.730370 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.734140 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.752082 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.762844 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.777396 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.789026 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.807009 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 
10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.825401 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.832376 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.832406 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.832414 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc 
kubenswrapper[4711]: I1202 10:14:00.832427 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.832435 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.842837 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.862087 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.873244 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:5
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.884912 4711 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.897150 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.908567 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.918110 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.935012 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.935042 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.935051 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.935065 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:00 crc kubenswrapper[4711]: I1202 10:14:00.935073 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:00Z","lastTransitionTime":"2025-12-02T10:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.037774 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.037810 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.037822 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.037839 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.037851 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.081344 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:01 crc kubenswrapper[4711]: E1202 10:14:01.081465 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.081775 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:01 crc kubenswrapper[4711]: E1202 10:14:01.081848 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.082771 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:01 crc kubenswrapper[4711]: E1202 10:14:01.083004 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.105942 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.119700 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.140800 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.140840 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.140851 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.140868 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.140879 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.141735 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwp
z2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.176896 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.197440 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.212940 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.226513 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.240719 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.246378 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.246411 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.246419 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.246436 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.246447 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.253909 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.267652 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.283390 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.302848 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.353727 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 
10:14:01.353784 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.353795 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.353812 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.353847 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.363812 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.380659 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.396408 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.456064 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.456091 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.456098 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.456111 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.456121 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.558446 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.558504 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.558515 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.558531 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.558541 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.660421 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.660455 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.660465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.660482 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.660494 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.762810 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.762844 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.762852 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.762866 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.762875 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.865897 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.866008 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.866035 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.866068 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.866093 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.969357 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.969422 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.969435 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.969456 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:01 crc kubenswrapper[4711]: I1202 10:14:01.969469 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:01Z","lastTransitionTime":"2025-12-02T10:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.072230 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.072272 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.072281 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.072295 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.072304 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.174471 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.174521 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.174536 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.174553 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.174566 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.277231 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.277299 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.277321 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.277346 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.277371 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.379895 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.379924 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.379932 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.379945 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.379974 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.415325 4711 generic.go:334] "Generic (PLEG): container finished" podID="3b9aece8-a05e-47ea-ab7f-b906e93c71c6" containerID="c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2" exitCode=0 Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.415427 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerDied","Data":"c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.415475 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.439595 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.473281 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.487480 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.507627 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.521770 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.529010 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.529048 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.529060 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.529078 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.529090 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.533269 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.546489 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.561247 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T1
0:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.587824 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.613163 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:5
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.628831 4711 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.632155 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.632207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.632218 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.632239 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.632251 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.644565 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.657877 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.668713 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.734705 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.734752 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.734764 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.734783 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.734795 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.850336 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.850382 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.850394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.850414 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.850425 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.952873 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.952928 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.952939 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.952985 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:02 crc kubenswrapper[4711]: I1202 10:14:02.952999 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:02Z","lastTransitionTime":"2025-12-02T10:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.055174 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.055215 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.055224 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.055240 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.055252 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.077715 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.077771 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.077771 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:03 crc kubenswrapper[4711]: E1202 10:14:03.077876 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:03 crc kubenswrapper[4711]: E1202 10:14:03.078075 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:03 crc kubenswrapper[4711]: E1202 10:14:03.078243 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.158084 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.158170 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.158179 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.158195 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.158205 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.261270 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.261312 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.261322 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.261339 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.261350 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.363802 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.363828 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.363836 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.363850 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.363861 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.422016 4711 generic.go:334] "Generic (PLEG): container finished" podID="3b9aece8-a05e-47ea-ab7f-b906e93c71c6" containerID="a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda" exitCode=0 Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.422053 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerDied","Data":"a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.436162 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.449461 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.463114 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.467525 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.467564 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.467573 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.467588 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.467599 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.478116 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.492012 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.503107 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.517836 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 
10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.531036 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.543254 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.554354 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.569465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc 
kubenswrapper[4711]: I1202 10:14:03.569504 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.569513 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.569527 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.569536 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.572568 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.587572 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.603344 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.618005 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:03Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.671786 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.671807 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.671818 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.671831 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.671839 4711 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.773789 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.773834 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.773844 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.773858 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.773866 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.876294 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.876360 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.876374 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.876394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.876427 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.979617 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.979683 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.979696 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.979713 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:03 crc kubenswrapper[4711]: I1202 10:14:03.979726 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:03Z","lastTransitionTime":"2025-12-02T10:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.082572 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.082609 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.082619 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.082649 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.082659 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.185851 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.185927 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.185937 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.185977 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.185987 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.288022 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.288075 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.288087 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.288109 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.288122 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.371311 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s"] Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.372163 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.376563 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.376977 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.390052 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.391269 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.391330 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.391344 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.391360 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.391372 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.400675 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.413683 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.425830 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.430788 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" event={"ID":"3b9aece8-a05e-47ea-ab7f-b906e93c71c6","Type":"ContainerStarted","Data":"3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.432662 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/0.log" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.435695 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335" exitCode=1 Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.435742 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.436782 4711 scope.go:117] "RemoveContainer" containerID="68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.439860 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.451032 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.468659 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.483344 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/cc6d8705-9138-499d-bacc-6464f4cca9df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.483385 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc6d8705-9138-499d-bacc-6464f4cca9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.483403 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc6d8705-9138-499d-bacc-6464f4cca9df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.483439 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jv6b\" (UniqueName: \"kubernetes.io/projected/cc6d8705-9138-499d-bacc-6464f4cca9df-kube-api-access-9jv6b\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.490550 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.494577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.494639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.494675 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.494690 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.494700 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.503641 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.517308 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.533459 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c598
72755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.546257 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.559982 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.570776 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.585107 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jv6b\" (UniqueName: \"kubernetes.io/projected/cc6d8705-9138-499d-bacc-6464f4cca9df-kube-api-access-9jv6b\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.585228 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc6d8705-9138-499d-bacc-6464f4cca9df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.585253 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc6d8705-9138-499d-bacc-6464f4cca9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.585280 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc6d8705-9138-499d-bacc-6464f4cca9df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.585355 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.585904 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc6d8705-9138-499d-bacc-6464f4cca9df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.586152 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc6d8705-9138-499d-bacc-6464f4cca9df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.592520 4711 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc6d8705-9138-499d-bacc-6464f4cca9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.600565 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.600636 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.600647 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.600668 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.600683 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.605246 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.612630 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.613619 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jv6b\" (UniqueName: \"kubernetes.io/projected/cc6d8705-9138-499d-bacc-6464f4cca9df-kube-api-access-9jv6b\") pod \"ovnkube-control-plane-749d76644c-rh62s\" (UID: \"cc6d8705-9138-499d-bacc-6464f4cca9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.622115 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.639483 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.661883 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.682046 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.688004 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.698349 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"ho
st\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: W1202 10:14:04.702488 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc6d8705_9138_499d_bacc_6464f4cca9df.slice/crio-04f7998844e228e48705dfa1d4d52a3203aaf6a2b79d48faad5189d8a6335e50 WatchSource:0}: Error finding container 04f7998844e228e48705dfa1d4d52a3203aaf6a2b79d48faad5189d8a6335e50: Status 404 returned error can't find the container with id 04f7998844e228e48705dfa1d4d52a3203aaf6a2b79d48faad5189d8a6335e50 Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.704009 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.704038 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.704048 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.704066 4711 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.704077 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.724752 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.751697 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.784663 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.801986 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.814449 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.814515 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.814526 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.814576 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.814600 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.824547 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.838860 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.857029 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.879907 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.901904 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 10:14:03.797272 6232 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:14:03.797321 6232 handler.go:190] Sending *v1.Namespace 
event handler 1 for removal\\\\nI1202 10:14:03.797326 6232 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 10:14:03.797337 6232 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:14:03.797356 6232 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:14:03.797363 6232 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:14:03.797406 6232 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:14:03.797413 6232 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 10:14:03.797422 6232 factory.go:656] Stopping watch factory\\\\nI1202 10:14:03.797432 6232 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:14:03.797435 6232 ovnkube.go:599] Stopped ovnkube\\\\nI1202 10:14:03.797440 6232 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 10:14:03.797452 6232 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:14:03.797464 6232 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:14:03.797466 6232 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:04Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.917261 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.917297 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.917308 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.917324 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:04 crc kubenswrapper[4711]: I1202 10:14:04.917334 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:04Z","lastTransitionTime":"2025-12-02T10:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.020251 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.020313 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.020327 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.020351 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.020359 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.077896 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.077896 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.077907 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:05 crc kubenswrapper[4711]: E1202 10:14:05.078226 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:05 crc kubenswrapper[4711]: E1202 10:14:05.078345 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:05 crc kubenswrapper[4711]: E1202 10:14:05.078470 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.123709 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.123758 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.123767 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.123784 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.123795 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.227885 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.227990 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.228032 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.228069 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.228094 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.332434 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.332518 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.332537 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.332603 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.332637 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.434901 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.434979 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.435007 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.435042 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.435058 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.441730 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/0.log" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.446179 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.446799 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.447290 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" event={"ID":"cc6d8705-9138-499d-bacc-6464f4cca9df","Type":"ContainerStarted","Data":"04f7998844e228e48705dfa1d4d52a3203aaf6a2b79d48faad5189d8a6335e50"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.467281 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.490126 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.493109 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-c82q2"] Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.494380 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:05 crc kubenswrapper[4711]: E1202 10:14:05.494489 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.506335 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.522144 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.542843 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.544274 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.544326 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.544339 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.544367 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.544384 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.561620 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.577547 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.596014 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.596599 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhx2\" (UniqueName: \"kubernetes.io/projected/87347875-9865-4380-a0ea-3fde5596dce7-kube-api-access-5nhx2\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.601310 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.631504 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 10:14:03.797272 6232 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:14:03.797321 6232 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 10:14:03.797326 6232 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1202 10:14:03.797337 6232 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:14:03.797356 6232 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:14:03.797363 6232 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:14:03.797406 6232 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:14:03.797413 6232 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 10:14:03.797422 6232 factory.go:656] Stopping watch factory\\\\nI1202 10:14:03.797432 6232 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:14:03.797435 6232 ovnkube.go:599] Stopped ovnkube\\\\nI1202 10:14:03.797440 6232 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 10:14:03.797452 6232 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:14:03.797464 6232 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:14:03.797466 6232 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.646725 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.646768 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.646777 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.646792 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.646804 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.652173 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.678359 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.693038 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.698062 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhx2\" (UniqueName: \"kubernetes.io/projected/87347875-9865-4380-a0ea-3fde5596dce7-kube-api-access-5nhx2\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.698170 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:05 crc kubenswrapper[4711]: E1202 10:14:05.698318 4711 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:05 crc kubenswrapper[4711]: E1202 10:14:05.698440 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs podName:87347875-9865-4380-a0ea-3fde5596dce7 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:06.198401028 +0000 UTC m=+35.907767485 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs") pod "network-metrics-daemon-c82q2" (UID: "87347875-9865-4380-a0ea-3fde5596dce7") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.707566 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.717585 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhx2\" (UniqueName: \"kubernetes.io/projected/87347875-9865-4380-a0ea-3fde5596dce7-kube-api-access-5nhx2\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.726151 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.743508 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.749623 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.749660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.749669 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.749686 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.749703 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.757626 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.773570 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.788501 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.801620 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.814112 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.823455 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.832545 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127b
e787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.846669 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.851371 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.851416 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.851426 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.851443 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.851453 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.867389 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 10:14:03.797272 6232 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:14:03.797321 6232 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 10:14:03.797326 6232 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1202 10:14:03.797337 6232 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:14:03.797356 6232 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:14:03.797363 6232 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:14:03.797406 6232 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:14:03.797413 6232 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 10:14:03.797422 6232 factory.go:656] Stopping watch factory\\\\nI1202 10:14:03.797432 6232 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:14:03.797435 6232 ovnkube.go:599] Stopped ovnkube\\\\nI1202 10:14:03.797440 6232 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 10:14:03.797452 6232 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:14:03.797464 6232 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:14:03.797466 6232 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.881807 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc 
kubenswrapper[4711]: I1202 10:14:05.894505 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.906749 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.924376 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.935272 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.949413 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.953830 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.953884 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.953898 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.953916 4711 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.953929 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:05Z","lastTransitionTime":"2025-12-02T10:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:05 crc kubenswrapper[4711]: I1202 10:14:05.960109 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.055740 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.055795 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.055809 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.055827 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.055839 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.157754 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.157787 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.157796 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.157809 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.157818 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.203576 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:06 crc kubenswrapper[4711]: E1202 10:14:06.203747 4711 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:06 crc kubenswrapper[4711]: E1202 10:14:06.203825 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs podName:87347875-9865-4380-a0ea-3fde5596dce7 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:07.203806219 +0000 UTC m=+36.913172666 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs") pod "network-metrics-daemon-c82q2" (UID: "87347875-9865-4380-a0ea-3fde5596dce7") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.260409 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.260454 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.260465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.260483 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.260496 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.362914 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.363006 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.363018 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.363037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.363051 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.452600 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/1.log" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.453125 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/0.log" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.455487 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12" exitCode=1 Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.455529 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.455623 4711 scope.go:117] "RemoveContainer" containerID="68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.460733 4711 scope.go:117] "RemoveContainer" containerID="b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12" Dec 02 10:14:06 crc kubenswrapper[4711]: E1202 10:14:06.461082 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.462130 4711 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" event={"ID":"cc6d8705-9138-499d-bacc-6464f4cca9df","Type":"ContainerStarted","Data":"9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.462163 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" event={"ID":"cc6d8705-9138-499d-bacc-6464f4cca9df","Type":"ContainerStarted","Data":"e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.464752 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.464772 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.464781 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.464792 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.464800 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.470554 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.481688 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.495196 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.506175 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.518448 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.527257 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.539174 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.550152 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.565322 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.566825 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.566856 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.566868 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.566884 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.566896 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.578568 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.590290 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.602373 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.613648 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.626522 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.651764 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 10:14:03.797272 6232 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:14:03.797321 6232 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 10:14:03.797326 6232 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1202 10:14:03.797337 6232 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:14:03.797356 6232 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:14:03.797363 6232 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:14:03.797406 6232 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:14:03.797413 6232 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 10:14:03.797422 6232 factory.go:656] Stopping watch factory\\\\nI1202 10:14:03.797432 6232 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:14:03.797435 6232 ovnkube.go:599] Stopped ovnkube\\\\nI1202 10:14:03.797440 6232 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 10:14:03.797452 6232 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:14:03.797464 6232 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:14:03.797466 6232 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"message\\\":\\\":[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:14:06.013808 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mo
untPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPat
h\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 
crc kubenswrapper[4711]: I1202 10:14:06.664379 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc 
kubenswrapper[4711]: I1202 10:14:06.668909 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.669147 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.669208 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.669268 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.669331 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.677591 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.689014 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.702867 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.714746 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.726031 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.737204 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.748837 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.762515 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.771356 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc 
kubenswrapper[4711]: I1202 10:14:06.771396 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.771404 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.771422 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.771431 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.781436 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68077b2b4d9d6f6e16cf293021c16760c6fdf2572b18bf2002c797fabaecf335\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 10:14:03.797272 6232 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:14:03.797321 6232 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 10:14:03.797326 6232 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 10:14:03.797337 6232 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:14:03.797356 6232 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:14:03.797363 6232 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:14:03.797406 6232 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:14:03.797413 6232 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 10:14:03.797422 6232 factory.go:656] Stopping watch factory\\\\nI1202 10:14:03.797432 6232 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:14:03.797435 6232 ovnkube.go:599] Stopped ovnkube\\\\nI1202 10:14:03.797440 6232 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 10:14:03.797452 6232 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:14:03.797464 6232 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:14:03.797466 6232 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"message\\\":\\\":[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:14:06.013808 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e3
87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.792586 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.809297 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.810024 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:14:06 crc kubenswrapper[4711]: 
E1202 10:14:06.810208 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:14:22.810162357 +0000 UTC m=+52.519528804 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.810294 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.810337 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:06 crc kubenswrapper[4711]: E1202 10:14:06.810469 4711 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:14:06 crc kubenswrapper[4711]: E1202 10:14:06.810530 
4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:22.810517577 +0000 UTC m=+52.519884024 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:14:06 crc kubenswrapper[4711]: E1202 10:14:06.810467 4711 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:14:06 crc kubenswrapper[4711]: E1202 10:14:06.810606 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:22.810594199 +0000 UTC m=+52.519960646 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.826436 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.841886 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.855649 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.871206 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.873868 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.873921 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.873935 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.873974 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.873992 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.881843 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.977209 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.977261 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.977303 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.977326 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:06 crc kubenswrapper[4711]: I1202 10:14:06.977346 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:06Z","lastTransitionTime":"2025-12-02T10:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.012655 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.012721 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.012872 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.012892 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.012910 4711 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.012914 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.013009 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.013029 4711 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.012984 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:23.012967865 +0000 UTC m=+52.722334312 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.013122 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:23.013098589 +0000 UTC m=+52.722465056 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.077790 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.077864 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.077941 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.077993 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.078023 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.078071 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.078141 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.078227 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.079554 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.079596 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.079614 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.079630 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.079643 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.182074 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.182115 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.182125 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.182141 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.182152 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.215377 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.215605 4711 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.215705 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs podName:87347875-9865-4380-a0ea-3fde5596dce7 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:09.215672951 +0000 UTC m=+38.925039438 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs") pod "network-metrics-daemon-c82q2" (UID: "87347875-9865-4380-a0ea-3fde5596dce7") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.284847 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.284893 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.284906 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.284923 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.284935 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.388007 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.388086 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.388107 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.388136 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.388154 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.470013 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/1.log" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.474745 4711 scope.go:117] "RemoveContainer" containerID="b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12" Dec 02 10:14:07 crc kubenswrapper[4711]: E1202 10:14:07.475241 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.488370 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.490432 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.490491 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.490506 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.490523 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.490536 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.501502 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":
\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.517247 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.536429 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.551862 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.567941 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.581793 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.592844 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.593029 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.593051 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.593069 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.593080 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.598887 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.610646 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc 
kubenswrapper[4711]: I1202 10:14:07.623236 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.640190 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.663711 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"message\\\":\\\":[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:14:06.013808 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.679062 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a0
0ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.693132 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.694911 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.694961 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.694997 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.695016 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.695048 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.712535 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.727138 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:07Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.797548 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.797717 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.797744 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.797786 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.797801 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.899893 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.899926 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.899937 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.899983 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:07 crc kubenswrapper[4711]: I1202 10:14:07.899994 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:07Z","lastTransitionTime":"2025-12-02T10:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.003469 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.003897 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.004119 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.004279 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.004434 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.107204 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.107253 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.107264 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.107281 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.107292 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.210208 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.210495 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.210624 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.210733 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.210811 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.313415 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.313460 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.313471 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.313489 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.313500 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.416406 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.416460 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.416472 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.416493 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.416505 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.519737 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.519822 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.519853 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.519890 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.519916 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.623274 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.623358 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.623385 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.623417 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.623443 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.726357 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.726400 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.726415 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.726433 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.726446 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.829559 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.829593 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.829603 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.829617 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.829629 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.932884 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.932985 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.933010 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.933036 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:08 crc kubenswrapper[4711]: I1202 10:14:08.933054 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:08Z","lastTransitionTime":"2025-12-02T10:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.036161 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.036232 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.036265 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.036299 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.036321 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.078150 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.078298 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:09 crc kubenswrapper[4711]: E1202 10:14:09.078344 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.078150 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.078386 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:09 crc kubenswrapper[4711]: E1202 10:14:09.078511 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:09 crc kubenswrapper[4711]: E1202 10:14:09.078630 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:09 crc kubenswrapper[4711]: E1202 10:14:09.078754 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.139365 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.139464 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.139478 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.139498 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.139510 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.237417 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:09 crc kubenswrapper[4711]: E1202 10:14:09.237584 4711 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:09 crc kubenswrapper[4711]: E1202 10:14:09.237664 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs podName:87347875-9865-4380-a0ea-3fde5596dce7 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:13.237643306 +0000 UTC m=+42.947009773 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs") pod "network-metrics-daemon-c82q2" (UID: "87347875-9865-4380-a0ea-3fde5596dce7") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.242400 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.242455 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.242471 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.242494 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.242510 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.345779 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.346039 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.346056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.346081 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.346098 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.448110 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.448168 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.448184 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.448206 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.448220 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.550337 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.550375 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.550384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.550398 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.550407 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.652899 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.652941 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.652967 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.652983 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.652993 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.755649 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.755690 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.755698 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.755713 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.755722 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.858594 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.858631 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.858639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.858654 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.858664 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.961602 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.961684 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.961708 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.961738 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:09 crc kubenswrapper[4711]: I1202 10:14:09.961757 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:09Z","lastTransitionTime":"2025-12-02T10:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.064719 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.064806 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.064834 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.064869 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.064895 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.167625 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.167668 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.167677 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.167691 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.167700 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.270817 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.270891 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.270909 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.270938 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.270992 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.374126 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.374167 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.374175 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.374193 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.374203 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.477207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.477266 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.477284 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.477307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.477323 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.581989 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.582105 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.582120 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.582140 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.582151 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.684068 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.684126 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.684138 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.684157 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.684171 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.786291 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.786347 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.786357 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.786378 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.786390 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.888857 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.888938 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.888976 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.889002 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.889019 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.931887 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.931976 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.931991 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.932011 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.932026 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: E1202 10:14:10.944065 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.947898 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.947945 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.947983 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.948003 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.948015 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: E1202 10:14:10.961669 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.965997 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.966052 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.966063 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.966083 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.966096 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: E1202 10:14:10.978006 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.982421 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.982450 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.982461 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.982476 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.982486 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:10 crc kubenswrapper[4711]: E1202 10:14:10.995601 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.999837 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:10 crc kubenswrapper[4711]: I1202 10:14:10.999883 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:10.999896 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:10.999919 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:10.999932 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:10Z","lastTransitionTime":"2025-12-02T10:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: E1202 10:14:11.014109 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: E1202 10:14:11.014232 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.015807 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.015841 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.015853 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.015869 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.015880 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.077441 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:11 crc kubenswrapper[4711]: E1202 10:14:11.077618 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.077663 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.077721 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:11 crc kubenswrapper[4711]: E1202 10:14:11.077806 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.077832 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:11 crc kubenswrapper[4711]: E1202 10:14:11.077906 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:11 crc kubenswrapper[4711]: E1202 10:14:11.078013 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.091399 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.105706 4711 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.118005 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.118052 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.118062 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.118079 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.118090 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.121436 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.133737 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.148413 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.159226 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.170494 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.189330 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.204463 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.219997 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.222985 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.223061 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.223085 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.223118 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.223141 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.231216 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.246049 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.257580 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.276989 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.295926 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"message\\\":\\\":[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:14:06.013808 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.307657 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.325907 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.325961 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.325972 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.325987 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.325996 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.429567 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.429612 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.429623 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.429641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.429651 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.532659 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.532875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.532936 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.533038 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.533104 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.635307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.635374 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.635395 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.635411 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.635426 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.758807 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.758851 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.758860 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.758875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.758884 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.861249 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.861288 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.861299 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.861316 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.861328 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.963804 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.963856 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.963864 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.963881 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:11 crc kubenswrapper[4711]: I1202 10:14:11.963906 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:11Z","lastTransitionTime":"2025-12-02T10:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.066339 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.066384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.066402 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.066423 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.066434 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.169008 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.169060 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.169072 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.169091 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.169106 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.271792 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.271832 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.271843 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.271859 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.271870 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.374844 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.374917 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.374940 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.375022 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.375041 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.477789 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.477875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.477894 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.477920 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.477939 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.580983 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.581037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.581050 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.581070 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.581082 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.683828 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.683899 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.684009 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.684047 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.684075 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.786939 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.786982 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.786991 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.787007 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.787017 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.889678 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.889727 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.889737 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.889756 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.889771 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.992867 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.992917 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.992936 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.992992 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:12 crc kubenswrapper[4711]: I1202 10:14:12.993014 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:12Z","lastTransitionTime":"2025-12-02T10:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.078363 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.078438 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.078363 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:13 crc kubenswrapper[4711]: E1202 10:14:13.078634 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.078712 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:13 crc kubenswrapper[4711]: E1202 10:14:13.078864 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:13 crc kubenswrapper[4711]: E1202 10:14:13.079044 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:13 crc kubenswrapper[4711]: E1202 10:14:13.079203 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.095811 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.095883 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.095893 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.095910 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.095922 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.200421 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.200489 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.200505 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.200530 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.200547 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.285121 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:13 crc kubenswrapper[4711]: E1202 10:14:13.285305 4711 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:13 crc kubenswrapper[4711]: E1202 10:14:13.285415 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs podName:87347875-9865-4380-a0ea-3fde5596dce7 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:21.285382279 +0000 UTC m=+50.994748766 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs") pod "network-metrics-daemon-c82q2" (UID: "87347875-9865-4380-a0ea-3fde5596dce7") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.303219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.303305 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.303327 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.303352 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.303365 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.405470 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.405510 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.405519 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.405539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.405551 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.508304 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.508354 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.508388 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.508410 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.508424 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.611865 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.611920 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.612056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.612096 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.612115 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.714865 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.714912 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.714924 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.714941 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.714969 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.817802 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.817847 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.817858 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.817877 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.817887 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.920613 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.920648 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.920662 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.920680 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:13 crc kubenswrapper[4711]: I1202 10:14:13.920692 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:13Z","lastTransitionTime":"2025-12-02T10:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.023237 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.023277 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.023287 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.023303 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.023313 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.125929 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.126222 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.126367 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.126469 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.126556 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.230528 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.230568 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.230577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.230592 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.230601 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.333666 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.333718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.333734 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.333756 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.333773 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.436668 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.436708 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.436717 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.436733 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.436743 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.539613 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.539657 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.539672 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.539689 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.539700 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.642660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.642741 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.642756 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.642783 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.642799 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.745554 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.745633 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.745652 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.745678 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.745694 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.849899 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.849976 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.849989 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.850005 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.850016 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.953008 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.953045 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.953053 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.953067 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:14 crc kubenswrapper[4711]: I1202 10:14:14.953077 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:14Z","lastTransitionTime":"2025-12-02T10:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.055365 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.055400 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.055410 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.055427 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.055436 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.077554 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.077665 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.077674 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.077575 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:15 crc kubenswrapper[4711]: E1202 10:14:15.077779 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:15 crc kubenswrapper[4711]: E1202 10:14:15.077864 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:15 crc kubenswrapper[4711]: E1202 10:14:15.078047 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:15 crc kubenswrapper[4711]: E1202 10:14:15.078162 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.158109 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.158195 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.158223 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.158254 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.158271 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.261328 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.261400 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.261420 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.261444 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.261461 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.364371 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.364455 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.364467 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.364489 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.364502 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.467657 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.467748 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.467760 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.467805 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.467820 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.571146 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.571248 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.571262 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.571283 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.571298 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.674340 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.674448 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.674462 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.674482 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.674493 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.777243 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.777322 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.777345 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.777376 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.777406 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.880579 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.880627 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.880639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.880655 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.880666 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.984180 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.984237 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.984248 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.984273 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:15 crc kubenswrapper[4711]: I1202 10:14:15.984285 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:15Z","lastTransitionTime":"2025-12-02T10:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.087322 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.087367 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.087380 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.087398 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.087409 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.190537 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.190585 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.190597 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.190617 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.190630 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.293235 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.293286 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.293306 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.293328 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.293344 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.395770 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.395807 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.395817 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.395833 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.395844 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.498060 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.498101 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.498112 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.498127 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.498138 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.601139 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.601184 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.601193 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.601207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.601215 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.704088 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.704148 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.704165 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.704187 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.704201 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.806560 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.806618 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.806635 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.806665 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.806683 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.909215 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.909289 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.909313 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.909418 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:16 crc kubenswrapper[4711]: I1202 10:14:16.909530 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:16Z","lastTransitionTime":"2025-12-02T10:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.012433 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.012474 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.012502 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.012519 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.012528 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.078066 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:17 crc kubenswrapper[4711]: E1202 10:14:17.078250 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.078084 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.078296 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.078062 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:17 crc kubenswrapper[4711]: E1202 10:14:17.078333 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:17 crc kubenswrapper[4711]: E1202 10:14:17.078380 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:17 crc kubenswrapper[4711]: E1202 10:14:17.078433 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.115049 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.115076 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.115085 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.115098 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.115106 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.217772 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.217836 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.217849 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.217871 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.217884 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.320154 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.320219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.320235 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.320261 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.320282 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.422916 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.423012 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.423031 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.423063 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.423082 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.444734 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.452768 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.458810 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.474293 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.487702 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.501648 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0
008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.512292 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.525280 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.526335 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.526367 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.526378 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.526406 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.526417 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.560107 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.583457 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.597419 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.608810 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.619386 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.627551 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.628442 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.628469 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.628478 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.628491 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.628508 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.638994 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z 
is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.656174 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"message\\\":\\\":[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:14:06.013808 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.665279 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.674928 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:17 crc 
kubenswrapper[4711]: I1202 10:14:17.730610 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.730724 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.730742 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.730765 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.730782 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.833178 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.833252 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.833275 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.833306 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.833329 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.937627 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.937902 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.937925 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.937996 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:17 crc kubenswrapper[4711]: I1202 10:14:17.938096 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:17Z","lastTransitionTime":"2025-12-02T10:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.040256 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.040344 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.040361 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.040391 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.040408 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.143219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.143273 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.143287 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.143307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.143321 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.245827 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.245875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.245886 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.245905 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.245918 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.350252 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.350288 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.350301 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.350327 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.350340 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.452564 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.452602 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.452611 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.452627 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.452638 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.557518 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.557584 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.557603 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.557647 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.557668 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.660892 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.661027 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.661040 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.661056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.661068 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.763612 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.763653 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.763665 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.763681 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.763691 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.866634 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.866687 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.866699 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.866718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.866731 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.969174 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.969247 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.969271 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.969302 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:18 crc kubenswrapper[4711]: I1202 10:14:18.969326 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:18Z","lastTransitionTime":"2025-12-02T10:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.073283 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.073336 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.073349 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.073367 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.073379 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.078296 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.078311 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.078297 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.078610 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:19 crc kubenswrapper[4711]: E1202 10:14:19.078639 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:19 crc kubenswrapper[4711]: E1202 10:14:19.078693 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:19 crc kubenswrapper[4711]: E1202 10:14:19.078830 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:19 crc kubenswrapper[4711]: E1202 10:14:19.078906 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.176323 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.176365 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.176376 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.176395 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.176407 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.279513 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.279577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.279587 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.279604 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.279615 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.381481 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.381522 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.381532 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.381549 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.381558 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.484652 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.484723 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.484735 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.484753 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.484764 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.587711 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.588165 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.588230 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.588264 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.588286 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.691477 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.691518 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.691531 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.691548 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.691561 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.794423 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.794472 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.794489 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.794514 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.794529 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.897166 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.897208 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.897222 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.897238 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:19 crc kubenswrapper[4711]: I1202 10:14:19.897249 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:19Z","lastTransitionTime":"2025-12-02T10:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.000158 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.000207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.000219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.000237 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.000250 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.103263 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.103321 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.103338 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.103366 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.103384 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.206356 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.206385 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.206394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.206407 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.206415 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.309336 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.309387 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.309403 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.309428 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.309446 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.412604 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.412658 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.412672 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.412691 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.412704 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.515639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.515679 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.515708 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.515728 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.515742 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.618330 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.618381 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.618392 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.618406 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.618416 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.720789 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.720844 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.720860 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.720888 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.720904 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.833243 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.833281 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.833292 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.833308 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.833318 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.936294 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.936356 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.936369 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.936384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:20 crc kubenswrapper[4711]: I1202 10:14:20.936394 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:20Z","lastTransitionTime":"2025-12-02T10:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.039461 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.039499 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.039509 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.039523 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.039534 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.078404 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.078597 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.078708 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.079235 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.079431 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.079524 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.079632 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.079723 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.081638 4711 scope.go:117] "RemoveContainer" containerID="b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.096600 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.120771 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.135106 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12
-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.141406 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.141435 4711 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.141443 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.141456 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.141464 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.152505 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.153889 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.153935 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.153944 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.153977 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.153987 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.166350 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.166458 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.170186 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.170221 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.170261 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.170283 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.170295 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.180890 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.186039 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.189459 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.189491 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.189500 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.189514 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.189527 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.191383 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.202747 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.203682 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.206962 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.206996 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.207006 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.207021 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.207031 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.217310 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z 
is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.219795 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.223306 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.223346 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.223358 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.223374 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.223384 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.234788 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.234897 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.240320 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"message\\\":\\\":[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:14:06.013808 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.243029 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.243059 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.243069 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.243084 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.243094 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.250915 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc 
kubenswrapper[4711]: I1202 10:14:21.263261 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.278427 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.294474 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.306176 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.321416 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.332806 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.345661 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.345691 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.345701 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.345716 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.345727 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.377264 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.377418 4711 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:21 crc kubenswrapper[4711]: E1202 10:14:21.377525 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs podName:87347875-9865-4380-a0ea-3fde5596dce7 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:37.377481898 +0000 UTC m=+67.086848345 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs") pod "network-metrics-daemon-c82q2" (UID: "87347875-9865-4380-a0ea-3fde5596dce7") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.448419 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.448459 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.448470 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.448488 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.448499 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.522296 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/1.log" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.524681 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.525341 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.539700 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.550738 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.550780 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.550789 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 
10:14:21.550804 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.550814 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.555249 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.566378 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.579726 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.591080 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12
-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.603158 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.615603 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.630991 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.649917 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"message\\\":\\\":[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:14:06.013808 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.653502 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.653535 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.653546 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.653562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.653573 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.661077 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc 
kubenswrapper[4711]: I1202 10:14:21.672474 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.684054 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.700441 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.713006 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.723690 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.733800 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.744747 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.755488 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.755526 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.755539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.755558 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.755569 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.857384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.857417 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.857427 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.857442 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.857450 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.960825 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.960894 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.960917 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.960978 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:21 crc kubenswrapper[4711]: I1202 10:14:21.961014 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:21Z","lastTransitionTime":"2025-12-02T10:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.064644 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.064711 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.064732 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.064761 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.064813 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.168225 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.168300 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.168318 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.168343 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.168361 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.270412 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.270454 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.270467 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.270485 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.270498 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.373778 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.373878 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.373906 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.373939 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.374027 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.476541 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.476610 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.476631 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.476663 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.476686 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.530057 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/2.log" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.530679 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/1.log" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.533763 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82" exitCode=1 Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.533804 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.533839 4711 scope.go:117] "RemoveContainer" containerID="b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.534601 4711 scope.go:117] "RemoveContainer" containerID="0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82" Dec 02 10:14:22 crc kubenswrapper[4711]: E1202 10:14:22.534787 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.557643 4711 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.578670 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.579564 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.579770 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.579991 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.580168 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.580327 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.593903 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.606496 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.621887 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.633338 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12
-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.645589 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.666015 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.683809 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc 
kubenswrapper[4711]: I1202 10:14:22.683923 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.683944 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.684033 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.684096 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.688038 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b388947f5fa31ff8abc86a4a542d64d2d9b7f7375beb992fe1883429c75fff12\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"message\\\":\\\":[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:14:06.013808 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:22Z\\\",\\\"message\\\":\\\":14:21.909168 6638 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909177 6638 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909186 6638 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh in node crc\\\\nI1202 10:14:21.909191 6638 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh after 0 failed attempt(s)\\\\nF1202 10:14:21.909193 6638 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 
10:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e
387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.700985 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.712303 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.722432 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.736023 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.751791 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.787290 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.787327 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.787341 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.787361 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.787376 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.805724 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec8109
6cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.817203 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.829044 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:22Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.890097 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.890154 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.890166 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.890186 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.890199 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.902531 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.902641 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.902708 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:22 crc kubenswrapper[4711]: E1202 10:14:22.902786 4711 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:14:22 crc kubenswrapper[4711]: E1202 10:14:22.902842 4711 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:54.90282756 +0000 UTC m=+84.612194007 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:14:22 crc kubenswrapper[4711]: E1202 10:14:22.903030 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:14:54.903021175 +0000 UTC m=+84.612387622 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:14:22 crc kubenswrapper[4711]: E1202 10:14:22.903100 4711 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:14:22 crc kubenswrapper[4711]: E1202 10:14:22.903122 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 10:14:54.903116458 +0000 UTC m=+84.612482905 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.993495 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.993545 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.993559 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.993576 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:22 crc kubenswrapper[4711]: I1202 10:14:22.993588 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:22Z","lastTransitionTime":"2025-12-02T10:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.077614 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.077672 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.077623 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.077673 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.077794 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.077861 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.078001 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.078152 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.095403 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.095438 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.095446 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.095461 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.095471 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.104924 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.105004 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.105162 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.105163 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.105211 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.105239 4711 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:23 crc 
kubenswrapper[4711]: E1202 10:14:23.105349 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:55.10532597 +0000 UTC m=+84.814692447 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.105179 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.105453 4711 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.105499 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:14:55.105485964 +0000 UTC m=+84.814852411 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.198882 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.198930 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.198942 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.198977 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.198990 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.301906 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.302010 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.302029 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.302058 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.302075 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.405080 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.405119 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.405130 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.405147 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.405158 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.508151 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.508207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.508220 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.508239 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.508257 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.538700 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/2.log" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.541814 4711 scope.go:117] "RemoveContainer" containerID="0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82" Dec 02 10:14:23 crc kubenswrapper[4711]: E1202 10:14:23.542034 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.557310 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.568239 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.579391 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.591007 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 
10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.606639 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.611453 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.611497 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.611508 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.611524 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.611536 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.622865 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.634363 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.648083 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.669226 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:22Z\\\",\\\"message\\\":\\\":14:21.909168 6638 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909177 6638 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909186 6638 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh in 
node crc\\\\nI1202 10:14:21.909191 6638 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh after 0 failed attempt(s)\\\\nF1202 10:14:21.909193 6638 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 10:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.681390 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.693404 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.705263 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.713577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.713629 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.713641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.713659 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.713676 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.719217 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.728942 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.740275 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.750978 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.762557 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:23Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.816214 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.816257 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.816266 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.816279 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.816290 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.918194 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.918258 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.918266 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.918281 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:23 crc kubenswrapper[4711]: I1202 10:14:23.918290 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:23Z","lastTransitionTime":"2025-12-02T10:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.020404 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.020452 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.020466 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.020489 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.020504 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.123885 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.123915 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.123925 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.123941 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.123969 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.227410 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.227455 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.227463 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.227478 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.227488 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.329433 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.329726 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.329789 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.329857 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.329985 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.433227 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.433267 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.433279 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.433299 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.433311 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.535686 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.535734 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.535746 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.535764 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.535776 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.639078 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.639156 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.639167 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.639190 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.639203 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.741649 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.741720 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.741738 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.741763 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.741780 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.843663 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.843733 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.843758 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.843789 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.843810 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.946170 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.946210 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.946221 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.946236 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:24 crc kubenswrapper[4711]: I1202 10:14:24.946247 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:24Z","lastTransitionTime":"2025-12-02T10:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.048554 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.048594 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.048605 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.048621 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.048636 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.077708 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.077792 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:25 crc kubenswrapper[4711]: E1202 10:14:25.077918 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.077709 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:25 crc kubenswrapper[4711]: E1202 10:14:25.078110 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:25 crc kubenswrapper[4711]: E1202 10:14:25.078257 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.078337 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:25 crc kubenswrapper[4711]: E1202 10:14:25.078484 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.151564 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.151650 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.151662 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.151686 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.151705 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.254843 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.254902 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.254913 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.254928 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.254937 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.357552 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.357612 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.357631 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.357655 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.357670 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.459267 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.459307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.459315 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.459328 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.459338 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.563487 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.563534 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.563546 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.563565 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.563578 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.666813 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.666860 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.666871 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.666890 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.666903 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.769098 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.769143 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.769152 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.769167 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.769177 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.872014 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.872085 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.872101 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.872129 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.872147 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.975063 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.975100 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.975110 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.975124 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:25 crc kubenswrapper[4711]: I1202 10:14:25.975133 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:25Z","lastTransitionTime":"2025-12-02T10:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.077343 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.077382 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.077394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.077465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.077494 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.181194 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.181260 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.181280 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.181361 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.181391 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.283969 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.284010 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.284021 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.284038 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.284051 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.386802 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.386847 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.386857 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.386871 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.386884 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.489668 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.489709 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.489719 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.489738 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.489750 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.592420 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.592466 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.592477 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.592503 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.592513 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.696718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.696764 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.696929 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.696985 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.696998 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.799483 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.799527 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.799543 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.799560 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.799570 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.902477 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.902512 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.902521 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.902535 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:26 crc kubenswrapper[4711]: I1202 10:14:26.902550 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:26Z","lastTransitionTime":"2025-12-02T10:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.005575 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.005624 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.005633 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.005647 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.005656 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.077707 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.077750 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.077708 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:27 crc kubenswrapper[4711]: E1202 10:14:27.077854 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:27 crc kubenswrapper[4711]: E1202 10:14:27.077918 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:27 crc kubenswrapper[4711]: E1202 10:14:27.077996 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.078181 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:27 crc kubenswrapper[4711]: E1202 10:14:27.078354 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.108570 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.108615 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.108629 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.108648 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.108660 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.211608 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.211664 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.211674 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.211688 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.211699 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.314280 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.314323 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.314339 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.314354 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.314365 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.417117 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.417161 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.417173 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.417190 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.417201 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.519464 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.519547 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.519560 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.519579 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.519588 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.622285 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.622325 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.622336 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.622350 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.622361 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.725555 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.725614 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.725644 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.725687 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.725718 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.829330 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.829393 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.829406 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.829428 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.829444 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.932103 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.932172 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.932185 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.932204 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:27 crc kubenswrapper[4711]: I1202 10:14:27.932217 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:27Z","lastTransitionTime":"2025-12-02T10:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.034562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.034621 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.034630 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.034645 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.034654 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.138567 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.138612 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.138624 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.138641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.138656 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.241281 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.241331 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.241342 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.241359 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.241371 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.344870 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.344931 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.344948 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.344988 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.345004 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.447653 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.447708 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.447720 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.447741 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.447754 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.550467 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.550526 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.550560 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.550579 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.550591 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.652415 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.652460 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.652469 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.652485 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.652493 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.755995 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.756046 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.756063 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.756091 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.756114 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.858319 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.858362 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.858372 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.858386 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.858396 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.961110 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.961161 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.961173 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.961189 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:28 crc kubenswrapper[4711]: I1202 10:14:28.961198 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:28Z","lastTransitionTime":"2025-12-02T10:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.063180 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.063219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.063228 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.063244 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.063258 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.077461 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.077524 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.077561 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:29 crc kubenswrapper[4711]: E1202 10:14:29.077596 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.077569 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:29 crc kubenswrapper[4711]: E1202 10:14:29.077778 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:29 crc kubenswrapper[4711]: E1202 10:14:29.077768 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:29 crc kubenswrapper[4711]: E1202 10:14:29.077828 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.166635 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.166695 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.166718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.166749 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.166776 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.270012 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.270049 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.270058 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.270073 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.270083 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.372082 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.372120 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.372129 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.372144 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.372152 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.474201 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.474271 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.474288 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.474314 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.474331 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.576795 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.576839 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.576853 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.576870 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.576881 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.680093 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.680148 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.680165 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.680188 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.680199 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.783159 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.783204 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.783214 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.783230 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.783240 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.886089 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.886144 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.886156 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.886181 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.886194 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.990369 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.990448 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.990461 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.990482 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:29 crc kubenswrapper[4711]: I1202 10:14:29.990496 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:29Z","lastTransitionTime":"2025-12-02T10:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.093702 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.093743 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.093752 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.093764 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.093774 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.196253 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.196307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.196320 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.196351 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.196370 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.299537 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.299617 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.299642 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.299677 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.299695 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.403052 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.403107 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.403116 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.403130 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.403139 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.505666 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.505707 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.505716 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.505731 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.505741 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.608484 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.608528 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.608539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.608557 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.608570 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.710855 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.710895 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.710906 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.710922 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.710933 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.813413 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.813719 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.813800 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.813880 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.813980 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.915832 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.915870 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.915879 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.915893 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:30 crc kubenswrapper[4711]: I1202 10:14:30.915901 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:30Z","lastTransitionTime":"2025-12-02T10:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.018370 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.018426 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.018444 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.018470 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.018492 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.077530 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.077591 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.078872 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.078892 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.078996 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.079278 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.079379 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.079455 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.098163 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.111414 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea8
3e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.121660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.121698 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.121708 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.121741 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.121752 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.125123 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.137659 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.151979 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.163258 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.175279 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.192647 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.204832 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12
-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.217300 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.224737 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.224769 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.224778 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.224796 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.224808 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.228306 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.238351 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.247440 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.263323 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:22Z\\\",\\\"message\\\":\\\":14:21.909168 6638 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909177 6638 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909186 6638 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh in 
node crc\\\\nI1202 10:14:21.909191 6638 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh after 0 failed attempt(s)\\\\nF1202 10:14:21.909193 6638 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 10:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.272686 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.283278 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc 
kubenswrapper[4711]: I1202 10:14:31.293821 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc 
kubenswrapper[4711]: I1202 10:14:31.326883 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.326926 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.326943 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.326980 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.326992 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.429557 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.429607 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.429616 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.429630 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.429641 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.532459 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.532497 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.532514 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.532530 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.532540 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.585973 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.586019 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.586028 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.586043 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.586053 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.603320 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.607632 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.607673 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.607685 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.607703 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.607716 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.622847 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.626787 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.626836 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.626848 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.626865 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.626878 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.639416 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.643684 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.643724 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.643732 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.643748 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.643761 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.657264 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.660694 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.660737 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.660753 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.660770 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.660781 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.671682 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:31Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:31 crc kubenswrapper[4711]: E1202 10:14:31.671831 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.673404 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.673434 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.673445 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.673467 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.673480 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.775791 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.775848 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.775858 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.775875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.775884 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.877772 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.877803 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.877811 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.877823 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.877832 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.980001 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.980028 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.980036 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.980049 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:31 crc kubenswrapper[4711]: I1202 10:14:31.980057 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:31Z","lastTransitionTime":"2025-12-02T10:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.083070 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.083129 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.083137 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.083149 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.083158 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.185755 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.185818 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.185833 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.185853 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.185867 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.288573 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.288614 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.288624 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.288641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.288650 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.391016 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.391073 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.391089 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.391111 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.391127 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.493210 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.493265 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.493278 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.493297 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.493310 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.595780 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.595821 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.595833 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.595849 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.595860 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.698193 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.698243 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.698258 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.698276 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.698291 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.804685 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.804723 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.804735 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.804757 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.804770 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.906861 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.906911 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.906923 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.906943 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:32 crc kubenswrapper[4711]: I1202 10:14:32.906985 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:32Z","lastTransitionTime":"2025-12-02T10:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.011637 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.011678 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.011689 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.011707 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.011718 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.077733 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.077755 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:33 crc kubenswrapper[4711]: E1202 10:14:33.077884 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.077911 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.077863 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:33 crc kubenswrapper[4711]: E1202 10:14:33.078131 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:33 crc kubenswrapper[4711]: E1202 10:14:33.078232 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:33 crc kubenswrapper[4711]: E1202 10:14:33.078374 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.113800 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.113849 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.113858 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.113871 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.113880 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.216542 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.216591 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.216602 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.216619 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.216634 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.318709 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.318752 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.318762 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.318775 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.318784 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.420941 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.420990 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.420998 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.421015 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.421024 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.523568 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.523609 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.523621 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.523638 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.523651 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.625926 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.625992 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.626004 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.626020 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.626032 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.728268 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.728322 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.728335 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.728349 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.728358 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.871362 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.871403 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.871411 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.871427 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.871437 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.974710 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.974759 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.974770 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.974790 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:33 crc kubenswrapper[4711]: I1202 10:14:33.974805 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:33Z","lastTransitionTime":"2025-12-02T10:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.079852 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.079912 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.079926 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.079977 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.079994 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.183089 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.183131 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.183144 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.183159 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.183173 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.286441 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.286503 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.286517 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.286537 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.286551 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.392983 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.393033 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.393047 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.393064 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.393075 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.497045 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.497138 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.497148 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.497167 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.497188 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.600626 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.600714 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.600726 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.600745 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.600755 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.704504 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.704609 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.704634 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.704674 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.704703 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.808732 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.808774 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.808801 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.808816 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.808834 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.913625 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.913712 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.913723 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.913766 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:34 crc kubenswrapper[4711]: I1202 10:14:34.913786 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:34Z","lastTransitionTime":"2025-12-02T10:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.017066 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.017114 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.017125 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.017142 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.017159 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.078786 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.078928 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.078786 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:35 crc kubenswrapper[4711]: E1202 10:14:35.079261 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.079364 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:35 crc kubenswrapper[4711]: E1202 10:14:35.079456 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:35 crc kubenswrapper[4711]: E1202 10:14:35.079843 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:35 crc kubenswrapper[4711]: E1202 10:14:35.080085 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.081874 4711 scope.go:117] "RemoveContainer" containerID="0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82" Dec 02 10:14:35 crc kubenswrapper[4711]: E1202 10:14:35.082468 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.120427 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.120466 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.120477 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.120499 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.120514 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.222819 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.222855 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.222863 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.222877 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.222886 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.332991 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.333033 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.333042 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.333057 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.333067 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.435262 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.435291 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.435301 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.435317 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.435328 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.538470 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.538498 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.538509 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.538526 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.538537 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.640710 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.640781 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.640803 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.640847 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.640870 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.743701 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.743755 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.743768 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.743786 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.743798 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.846480 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.846515 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.846528 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.846545 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.846554 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.949724 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.949767 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.949779 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.949797 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:35 crc kubenswrapper[4711]: I1202 10:14:35.949837 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:35Z","lastTransitionTime":"2025-12-02T10:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.052760 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.052798 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.052819 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.052833 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.052842 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.155986 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.156037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.156050 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.156068 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.156085 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.259016 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.259072 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.259089 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.259109 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.259124 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.361415 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.361459 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.361471 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.361488 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.361514 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.464082 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.464114 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.464122 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.464136 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.464144 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.566170 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.566199 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.566208 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.566221 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.566229 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.668231 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.668277 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.668289 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.668307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.668319 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.770585 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.770634 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.770642 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.770655 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.770665 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.873002 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.873036 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.873044 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.873058 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.873070 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.975867 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.975912 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.975923 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.975940 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:36 crc kubenswrapper[4711]: I1202 10:14:36.975966 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:36Z","lastTransitionTime":"2025-12-02T10:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.077572 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.077639 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:37 crc kubenswrapper[4711]: E1202 10:14:37.077730 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.077563 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:37 crc kubenswrapper[4711]: E1202 10:14:37.077795 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.077668 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:37 crc kubenswrapper[4711]: E1202 10:14:37.077912 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:37 crc kubenswrapper[4711]: E1202 10:14:37.078032 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.078794 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.078895 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.078999 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.079099 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.079172 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.181597 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.181647 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.181660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.181677 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.181695 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.284141 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.284186 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.284198 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.284214 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.284225 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.387204 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.387263 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.387272 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.387287 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.387296 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.454073 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:37 crc kubenswrapper[4711]: E1202 10:14:37.454217 4711 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:37 crc kubenswrapper[4711]: E1202 10:14:37.454321 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs podName:87347875-9865-4380-a0ea-3fde5596dce7 nodeName:}" failed. No retries permitted until 2025-12-02 10:15:09.454276128 +0000 UTC m=+99.163642575 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs") pod "network-metrics-daemon-c82q2" (UID: "87347875-9865-4380-a0ea-3fde5596dce7") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.490497 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.490542 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.490555 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.490573 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.490585 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.592941 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.593010 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.593027 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.593044 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.593054 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.695073 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.695116 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.695127 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.695147 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.695160 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.797376 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.797413 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.797425 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.797440 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.797452 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.899618 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.899703 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.899717 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.899735 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:37 crc kubenswrapper[4711]: I1202 10:14:37.899748 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:37Z","lastTransitionTime":"2025-12-02T10:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.002485 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.002549 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.002564 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.002584 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.002597 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.106079 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.106126 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.106134 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.106150 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.106159 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.209261 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.209333 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.209356 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.209385 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.209408 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.312794 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.312843 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.312854 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.312868 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.312878 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.415534 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.415581 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.415593 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.415609 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.415621 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.517738 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.517775 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.517790 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.517808 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.517817 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.620799 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.620842 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.620850 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.620865 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.620874 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.723695 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.723751 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.723775 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.723812 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.723828 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.827257 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.827308 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.827321 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.827340 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.827363 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.929715 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.929758 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.929767 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.929783 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:38 crc kubenswrapper[4711]: I1202 10:14:38.929794 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:38Z","lastTransitionTime":"2025-12-02T10:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.032013 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.032080 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.032089 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.032104 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.032114 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.077935 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.077992 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.078004 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.078098 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:39 crc kubenswrapper[4711]: E1202 10:14:39.078154 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:39 crc kubenswrapper[4711]: E1202 10:14:39.078285 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:39 crc kubenswrapper[4711]: E1202 10:14:39.078436 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:39 crc kubenswrapper[4711]: E1202 10:14:39.078544 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.134236 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.134290 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.134303 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.134321 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.134333 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.236448 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.236501 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.236512 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.236529 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.236540 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.340070 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.340114 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.340126 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.340144 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.340157 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.442440 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.442494 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.442511 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.442534 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.442549 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.544894 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.545046 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.545061 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.545079 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.545092 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.647507 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.647550 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.647562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.647581 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.647594 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.750618 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.750744 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.750896 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.751219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.751548 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.854697 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.854740 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.854751 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.854766 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.854776 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.957168 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.957219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.957234 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.957263 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:39 crc kubenswrapper[4711]: I1202 10:14:39.957281 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:39Z","lastTransitionTime":"2025-12-02T10:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.060037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.060099 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.060123 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.060152 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.060174 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.162330 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.162360 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.162369 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.162381 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.162389 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.264324 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.264372 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.264384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.264401 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.264412 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.367231 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.367272 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.367283 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.367298 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.367309 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.470144 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.470202 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.470220 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.470245 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.470263 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.572724 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.572777 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.572788 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.572808 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.572821 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.674623 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.674669 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.674682 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.674699 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.674710 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.776648 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.776693 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.776708 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.776726 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.776739 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.879760 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.879803 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.879828 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.879852 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.879868 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.981982 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.982022 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.982035 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.982051 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:40 crc kubenswrapper[4711]: I1202 10:14:40.982061 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:40Z","lastTransitionTime":"2025-12-02T10:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.078190 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.078263 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.078200 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.078237 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.078360 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.078442 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.078584 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.078690 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.083704 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.083740 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.083751 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.083765 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.083775 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.094002 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec8109
6cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.108044 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.121787 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.135864 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.148245 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12
-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.164216 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.177415 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.185658 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.185703 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.185717 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.185735 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.185752 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.192942 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.209756 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.229215 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:22Z\\\",\\\"message\\\":\\\":14:21.909168 6638 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909177 6638 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909186 6638 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh in 
node crc\\\\nI1202 10:14:21.909191 6638 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh after 0 failed attempt(s)\\\\nF1202 10:14:21.909193 6638 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 10:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.242370 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.255308 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc 
kubenswrapper[4711]: I1202 10:14:41.269039 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc 
kubenswrapper[4711]: I1202 10:14:41.286013 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e2
5aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.288200 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.288364 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.288384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.288412 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.288432 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.298554 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.310786 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.322320 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.390998 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.391119 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.391135 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.391161 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.391181 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.493049 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.493081 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.493089 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.493103 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.493112 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.594994 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.595034 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.595046 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.595062 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.595073 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.697394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.697450 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.697464 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.697483 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.697498 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.764207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.764247 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.764259 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.764276 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.764290 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.778466 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.783667 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.783688 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.783696 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.783709 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.783719 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.796689 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.800128 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.800196 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.800210 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.800224 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.800234 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.811269 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.814683 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.814708 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.814718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.814730 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.814738 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.833167 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.836635 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.836690 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.836703 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.836718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.836729 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.851433 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:41 crc kubenswrapper[4711]: E1202 10:14:41.851547 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.853450 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.853478 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.853487 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.853501 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.853510 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.956604 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.956651 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.956663 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.956682 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:41 crc kubenswrapper[4711]: I1202 10:14:41.956694 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:41Z","lastTransitionTime":"2025-12-02T10:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.059570 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.059816 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.059877 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.059938 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.060020 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.162274 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.162323 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.162340 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.162367 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.162385 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.264817 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.264861 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.264873 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.264891 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.264902 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.367117 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.367180 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.367255 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.367281 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.367300 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.469392 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.469637 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.469703 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.469789 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.469860 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.572654 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.572701 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.572712 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.572726 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.572735 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.602141 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/0.log" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.602862 4711 generic.go:334] "Generic (PLEG): container finished" podID="2fab88a2-3875-44a4-a926-7c76836b51b8" containerID="04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7" exitCode=1 Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.602968 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4qrj7" event={"ID":"2fab88a2-3875-44a4-a926-7c76836b51b8","Type":"ContainerDied","Data":"04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.603482 4711 scope.go:117] "RemoveContainer" containerID="04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.614893 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.626030 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.646000 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 
10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.662942 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.675626 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.675798 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.676024 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.676203 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.676324 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.678044 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.692866 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.704172 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.717417 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"2025-12-02T10:13:56+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc\\\\n2025-12-02T10:13:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc to /host/opt/cni/bin/\\\\n2025-12-02T10:13:57Z [verbose] multus-daemon started\\\\n2025-12-02T10:13:57Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:14:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.734347 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:22Z\\\",\\\"message\\\":\\\":14:21.909168 6638 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909177 6638 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909186 6638 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh in node crc\\\\nI1202 10:14:21.909191 6638 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh after 0 failed attempt(s)\\\\nF1202 10:14:21.909193 6638 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 10:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.744839 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.757738 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.772834 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.778081 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.778115 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.778125 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.778143 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.778153 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.787509 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.799388 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.814206 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.822831 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.833805 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:42Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.880601 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.880830 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.880841 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.880856 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.880866 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.985763 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.985797 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.985807 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.985822 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:42 crc kubenswrapper[4711]: I1202 10:14:42.985833 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:42Z","lastTransitionTime":"2025-12-02T10:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.078416 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.078465 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.078498 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:43 crc kubenswrapper[4711]: E1202 10:14:43.078590 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.078616 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:43 crc kubenswrapper[4711]: E1202 10:14:43.078754 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:43 crc kubenswrapper[4711]: E1202 10:14:43.078832 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:43 crc kubenswrapper[4711]: E1202 10:14:43.079127 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.087374 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.087411 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.087420 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.087435 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.087445 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.189411 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.189453 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.189468 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.189488 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.189499 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.292613 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.292655 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.292671 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.292688 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.292700 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.395097 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.395136 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.395149 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.395165 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.395178 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.497619 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.497665 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.497676 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.497693 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.497703 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.599476 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.599519 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.599528 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.599543 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.599552 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.608015 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/0.log" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.608074 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4qrj7" event={"ID":"2fab88a2-3875-44a4-a926-7c76836b51b8","Type":"ContainerStarted","Data":"783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.626460 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.640404 4711 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.661536 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.674640 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.689145 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.701563 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.702311 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.702345 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.702357 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.702372 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.702383 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.716199 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.726284 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.734420 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.746870 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 
10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.756900 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.767765 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.779856 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.791224 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.805018 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"2025-12-02T10:13:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc\\\\n2025-12-02T10:13:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc to /host/opt/cni/bin/\\\\n2025-12-02T10:13:57Z [verbose] multus-daemon started\\\\n2025-12-02T10:13:57Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T10:14:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.805226 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.805245 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.805253 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.805266 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.805274 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.821334 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:22Z\\\",\\\"message\\\":\\\":14:21.909168 6638 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909177 6638 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909186 6638 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh in 
node crc\\\\nI1202 10:14:21.909191 6638 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh after 0 failed attempt(s)\\\\nF1202 10:14:21.909193 6638 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 10:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.829906 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:43Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.907418 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.907444 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.907452 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.907465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:43 crc kubenswrapper[4711]: I1202 10:14:43.907475 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:43Z","lastTransitionTime":"2025-12-02T10:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.009639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.009692 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.009706 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.009726 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.009740 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.111445 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.111481 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.111490 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.111547 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.111559 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.213703 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.213740 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.213748 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.213760 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.213768 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.316216 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.316265 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.316276 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.316295 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.316309 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.418182 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.418228 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.418239 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.418260 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.418271 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.520865 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.520925 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.520940 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.520986 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.521007 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.623621 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.623657 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.623668 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.623685 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.623699 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.726046 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.726304 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.726435 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.726546 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.726647 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.829410 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.829458 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.829471 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.829488 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.829501 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.933390 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.933453 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.933474 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.933501 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:44 crc kubenswrapper[4711]: I1202 10:14:44.933525 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:44Z","lastTransitionTime":"2025-12-02T10:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.036752 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.036801 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.036813 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.036831 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.036845 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.077906 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.078208 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.078343 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:45 crc kubenswrapper[4711]: E1202 10:14:45.078335 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:45 crc kubenswrapper[4711]: E1202 10:14:45.078552 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.078676 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:45 crc kubenswrapper[4711]: E1202 10:14:45.078814 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:45 crc kubenswrapper[4711]: E1202 10:14:45.079017 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.095189 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.140069 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.140379 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.140460 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.140554 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.140645 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.243424 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.243487 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.243507 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.243534 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.243553 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.346115 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.346163 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.346177 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.346198 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.346212 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.450610 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.451861 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.451973 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.452056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.452126 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.555407 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.555497 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.555519 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.555553 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.555576 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.657708 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.657741 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.657750 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.657765 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.657774 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.761514 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.761593 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.761630 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.761660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.761686 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.864366 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.864395 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.864403 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.864416 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.864425 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.966840 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.966883 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.966894 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.966911 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:45 crc kubenswrapper[4711]: I1202 10:14:45.966922 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:45Z","lastTransitionTime":"2025-12-02T10:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.069401 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.069448 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.069459 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.069476 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.069488 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.173580 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.173641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.173662 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.173693 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.173714 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.276869 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.276933 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.277006 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.277047 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.277065 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.381749 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.382084 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.382163 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.382270 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.382370 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.485254 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.485292 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.485302 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.485316 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.485327 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.588461 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.588525 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.588534 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.588566 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.588584 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.690645 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.690715 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.690727 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.690754 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.690763 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.793901 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.793970 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.793984 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.794004 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.794016 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.897305 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.897376 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.897394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.897413 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:46 crc kubenswrapper[4711]: I1202 10:14:46.897424 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:46Z","lastTransitionTime":"2025-12-02T10:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.000015 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.000056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.000065 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.000079 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.000089 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.078361 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.078359 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.078518 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.078577 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:47 crc kubenswrapper[4711]: E1202 10:14:47.078725 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:47 crc kubenswrapper[4711]: E1202 10:14:47.078880 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:47 crc kubenswrapper[4711]: E1202 10:14:47.079097 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:47 crc kubenswrapper[4711]: E1202 10:14:47.079187 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.102867 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.102907 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.102918 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.102936 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.102966 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.205933 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.206005 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.206018 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.206038 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.206050 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.309334 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.309397 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.309421 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.309446 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.309481 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.412585 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.412641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.412654 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.412674 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.412691 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.514337 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.514370 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.514379 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.514392 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.514400 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.617384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.617428 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.617437 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.617452 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.617463 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.720697 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.720759 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.720769 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.720783 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.720791 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.824037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.824095 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.824107 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.824128 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.824138 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.926645 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.926718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.926728 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.926758 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:47 crc kubenswrapper[4711]: I1202 10:14:47.926767 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:47Z","lastTransitionTime":"2025-12-02T10:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.030344 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.030416 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.030440 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.030469 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.030491 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.078787 4711 scope.go:117] "RemoveContainer" containerID="0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.133710 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.133753 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.133763 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.133777 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.133786 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.237384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.237477 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.237499 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.237529 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.237557 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.339944 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.340006 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.340020 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.340036 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.340048 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.442822 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.442869 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.442882 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.442902 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.442914 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.545362 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.545401 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.545411 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.545426 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.545437 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.627152 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/2.log" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.629463 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.629854 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.643137 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.647407 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.647434 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.647442 4711 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.647454 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.647462 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.660146 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.671935 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.704009 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.721528 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.732324 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.741706 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.749639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.749706 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.749721 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.749737 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.749748 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.751422 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3185c5f0-a2f0-4322-983e-9bfa09bd083a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://283095cee32f35a91fa0b5cf45589e99f36719c211a1d5890567377b23f2b33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.773075 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:22Z\\\",\\\"message\\\":\\\":14:21.909168 6638 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909177 6638 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909186 6638 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh in 
node crc\\\\nI1202 10:14:21.909191 6638 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh after 0 failed attempt(s)\\\\nF1202 10:14:21.909193 6638 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 
10:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.786382 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc 
kubenswrapper[4711]: I1202 10:14:48.797104 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.813844 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"2025-12-02T10:13:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc\\\\n2025-12-02T10:13:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc to /host/opt/cni/bin/\\\\n2025-12-02T10:13:57Z [verbose] multus-daemon started\\\\n2025-12-02T10:13:57Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T10:14:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.831309 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350
5cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.843606 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a0
0ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.852565 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.852602 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.852612 4711 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.852627 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.852639 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.858659 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is 
after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.869674 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.894090 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.904826 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:48Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.955180 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.955224 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.955237 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.955258 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:48 crc kubenswrapper[4711]: I1202 10:14:48.955271 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:48Z","lastTransitionTime":"2025-12-02T10:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.057870 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.057917 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.057927 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.057940 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.057990 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.077391 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.077461 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:49 crc kubenswrapper[4711]: E1202 10:14:49.077551 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.077595 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.077602 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:49 crc kubenswrapper[4711]: E1202 10:14:49.077694 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:49 crc kubenswrapper[4711]: E1202 10:14:49.077779 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:49 crc kubenswrapper[4711]: E1202 10:14:49.077875 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.160409 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.160465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.160478 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.160497 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.160509 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.263201 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.263251 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.263262 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.263278 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.263289 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.365190 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.365232 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.365240 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.365257 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.365267 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.468830 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.468875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.468885 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.468904 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.468920 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.571737 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.571783 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.571795 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.571815 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.571826 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.637654 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/3.log" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.638431 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/2.log" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.642510 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726" exitCode=1 Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.642573 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.642773 4711 scope.go:117] "RemoveContainer" containerID="0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.643310 4711 scope.go:117] "RemoveContainer" containerID="24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726" Dec 02 10:14:49 crc kubenswrapper[4711]: E1202 10:14:49.643469 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.661588 4711 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.674705 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.674981 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.675081 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.675155 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.674968 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.675226 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.687040 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.697173 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.707589 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3185c5f0-a2f0-4322-983e-9bfa09bd083a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://283095cee32f35a91fa0b5cf45589e99f36719c211a1d5890567377b23f2b33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.720473 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.733175 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12
-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.743909 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.754993 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.767730 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"2025-12-02T10:13:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc\\\\n2025-12-02T10:13:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc to /host/opt/cni/bin/\\\\n2025-12-02T10:13:57Z [verbose] multus-daemon started\\\\n2025-12-02T10:13:57Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T10:14:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.778599 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.778704 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.778725 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.778776 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.778790 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.787231 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0267c103195345e4eb1c8bc7aa76e6e27a26aa3f3d83d68a0217ea815bd24f82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:22Z\\\",\\\"message\\\":\\\":14:21.909168 6638 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909177 6638 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh\\\\nI1202 10:14:21.909186 6638 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh in 
node crc\\\\nI1202 10:14:21.909191 6638 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-n6sdh after 0 failed attempt(s)\\\\nF1202 10:14:21.909193 6638 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:21Z is after 2025-08-24T17:21:41Z]\\\\nI1202 10:14:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:49Z\\\",\\\"message\\\":\\\"t:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128010 6989 model_client.go:382] Update operations generated as: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128062 6989 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128104 6989 model_client.go:382] Update operations generated as: [{Op:update 
Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d4
6e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.798685 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.814480 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.828582 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.844712 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.858507 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.872990 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.881204 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.881255 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.881272 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.881294 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.881306 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.886818 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:49Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.983589 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.983648 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.983665 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.983688 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:49 crc kubenswrapper[4711]: I1202 10:14:49.983706 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:49Z","lastTransitionTime":"2025-12-02T10:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.086255 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.086295 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.086303 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.086316 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.086339 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.188782 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.188830 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.188840 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.188856 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.188866 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.292660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.292719 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.292727 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.292743 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.292753 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.395467 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.395521 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.395562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.395582 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.395594 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.498720 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.498765 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.498775 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.498792 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.498804 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.601462 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.601551 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.601574 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.601603 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.601624 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.649756 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/3.log" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.661315 4711 scope.go:117] "RemoveContainer" containerID="24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726" Dec 02 10:14:50 crc kubenswrapper[4711]: E1202 10:14:50.661457 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.682033 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.696732 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.704497 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.704549 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.704561 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.704579 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.704591 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.713178 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.730585 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.747197 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.761624 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.777350 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.791805 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.804455 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3185c5f0-a2f0-4322-983e-9bfa09bd083a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://283095cee32f35a91fa0b5cf45589e99f36719c211a1d5890567377b23f2b33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.806920 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.806944 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.806980 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.806999 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.807008 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.826666 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.838939 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc 
kubenswrapper[4711]: I1202 10:14:50.851003 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.863062 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"2025-12-02T10:13:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc\\\\n2025-12-02T10:13:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc to /host/opt/cni/bin/\\\\n2025-12-02T10:13:57Z [verbose] multus-daemon started\\\\n2025-12-02T10:13:57Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T10:14:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.886824 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:49Z\\\",\\\"message\\\":\\\"t:[0a:58:0a:d9:00:04 10.217.0.4]} 
options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128010 6989 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128062 6989 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128104 6989 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.899412 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a0
0ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.909406 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.909457 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.909467 4711 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.909481 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.909490 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:50Z","lastTransitionTime":"2025-12-02T10:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.912245 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is 
after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.926679 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:50 crc kubenswrapper[4711]: I1202 10:14:50.946536 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.011942 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.012008 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.012020 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.012037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.012048 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.078412 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.078495 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.078504 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.078489 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:51 crc kubenswrapper[4711]: E1202 10:14:51.078697 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:51 crc kubenswrapper[4711]: E1202 10:14:51.079228 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:51 crc kubenswrapper[4711]: E1202 10:14:51.079609 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:51 crc kubenswrapper[4711]: E1202 10:14:51.080024 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.099552 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.112546 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.113941 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.113995 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.114004 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.114019 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.114029 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.123740 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.135730 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3185c5f0-a2f0-4322-983e-9bfa09bd083a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://283095cee32f35a91fa0b5cf45589e99f36719c211a1d5890567377b23f2b33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.149667 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\",\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.163320 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12
-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.182159 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.193017 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.206350 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e
7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.215674 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.215722 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.215733 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc 
kubenswrapper[4711]: I1202 10:14:51.215750 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.215763 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.220981 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"2025-12-02T10:13:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc\\\\n2025-12-02T10:13:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc to /host/opt/cni/bin/\\\\n2025-12-02T10:13:57Z [verbose] multus-daemon started\\\\n2025-12-02T10:13:57Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:14:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.249113 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:49Z\\\",\\\"message\\\":\\\"t:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128010 6989 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128062 6989 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128104 6989 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.265888 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.278408 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.292474 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.314738 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.319021 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.319106 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.319119 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.319190 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.319204 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.332869 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a00ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.347829 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.360310 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:51Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.422169 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.422233 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.422243 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.422313 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.422331 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.524583 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.524625 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.524635 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.524652 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.524662 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.627296 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.627656 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.627807 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.628020 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.628169 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.731869 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.731909 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.731919 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.731932 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.731940 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.835230 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.835285 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.835295 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.835311 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.835321 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.938459 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.938546 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.938566 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.938590 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:51 crc kubenswrapper[4711]: I1202 10:14:51.938607 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:51Z","lastTransitionTime":"2025-12-02T10:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.042715 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.042796 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.042815 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.042843 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.042863 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.146299 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.146367 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.146379 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.146395 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.146409 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.161093 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.161139 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.161151 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.161168 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.161181 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: E1202 10:14:52.182303 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.187238 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.187538 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.187742 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.187937 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.188179 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: E1202 10:14:52.209562 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.216124 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.216186 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.216201 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.216224 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.216238 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: E1202 10:14:52.236543 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.242681 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.242770 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.242793 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.242823 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.242847 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: E1202 10:14:52.265786 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.271696 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.271790 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.271807 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.271841 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.271858 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: E1202 10:14:52.285548 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:14:52Z is after 2025-08-24T17:21:41Z" Dec 02 10:14:52 crc kubenswrapper[4711]: E1202 10:14:52.285751 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.287642 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.287692 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.287723 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.287747 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.287776 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.390890 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.391165 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.391178 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.391195 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.391209 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.494170 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.494222 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.494231 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.494248 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.494259 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.597533 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.597614 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.597625 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.597646 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.597659 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.700479 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.700601 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.700616 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.700634 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.700645 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.803414 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.803478 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.803491 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.803516 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.804167 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.907798 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.907921 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.907977 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.908014 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:52 crc kubenswrapper[4711]: I1202 10:14:52.908034 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:52Z","lastTransitionTime":"2025-12-02T10:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.011875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.011944 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.012003 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.012034 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.012057 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.078105 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.078216 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.078232 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.078278 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:53 crc kubenswrapper[4711]: E1202 10:14:53.078410 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:53 crc kubenswrapper[4711]: E1202 10:14:53.078568 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:53 crc kubenswrapper[4711]: E1202 10:14:53.078672 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:53 crc kubenswrapper[4711]: E1202 10:14:53.078774 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.115282 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.115345 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.115369 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.115403 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.115429 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.217806 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.217837 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.217845 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.217859 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.217868 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.319729 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.319757 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.319767 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.319782 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.319797 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.422045 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.422075 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.422083 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.422096 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.422106 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.524911 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.525036 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.525060 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.525088 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.525109 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.628384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.628442 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.628457 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.628479 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.628495 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.731062 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.731128 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.731141 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.731158 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.731188 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.834087 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.834487 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.834588 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.834671 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.834742 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.938328 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.939222 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.939255 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.939278 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:53 crc kubenswrapper[4711]: I1202 10:14:53.939292 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:53Z","lastTransitionTime":"2025-12-02T10:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.043198 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.043253 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.043268 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.043290 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.043302 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.145785 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.145828 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.145839 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.145857 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.145871 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.249075 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.249154 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.249184 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.249218 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.249241 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.352170 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.352502 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.352622 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.352727 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.352814 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.455555 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.455620 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.455641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.455668 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.455685 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.558066 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.558112 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.558122 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.558137 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.558147 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.660740 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.660785 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.660797 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.660813 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.660822 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.764331 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.764382 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.764401 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.764428 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.764446 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.866735 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.867047 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.867134 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.867201 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.867269 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.927514 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.927649 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.927674 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:54 crc kubenswrapper[4711]: E1202 10:14:54.927793 4711 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:14:54 crc kubenswrapper[4711]: E1202 10:14:54.927878 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.927847459 +0000 UTC m=+148.637213906 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:14:54 crc kubenswrapper[4711]: E1202 10:14:54.928087 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.928079675 +0000 UTC m=+148.637446122 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:14:54 crc kubenswrapper[4711]: E1202 10:14:54.928272 4711 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:14:54 crc kubenswrapper[4711]: E1202 10:14:54.928484 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.928457525 +0000 UTC m=+148.637823982 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.969175 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.969211 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.969219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.969240 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:54 crc kubenswrapper[4711]: I1202 10:14:54.969250 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:54Z","lastTransitionTime":"2025-12-02T10:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.071817 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.072117 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.072203 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.072280 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.072342 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.080208 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.080321 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.080316 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.080262 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.080536 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.080719 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.080922 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.081082 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.130437 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.130517 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.130620 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.130647 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.130669 4711 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.130722 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.130702547 +0000 UTC m=+148.840069024 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.130620 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.130755 4711 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.130768 4711 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:55 crc kubenswrapper[4711]: E1202 10:14:55.130804 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.130792601 +0000 UTC m=+148.840159058 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.175599 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.175677 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.175702 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.175737 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.175760 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.282158 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.282204 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.282219 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.282237 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.282248 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.384862 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.384917 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.384931 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.384975 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.384990 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.487730 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.487767 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.487777 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.487791 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.487800 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.590540 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.590580 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.590590 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.590603 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.590612 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.693799 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.693899 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.693920 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.693943 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.694028 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.796838 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.796883 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.796895 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.796914 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.796928 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.899587 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.899655 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.899681 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.899711 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:55 crc kubenswrapper[4711]: I1202 10:14:55.899731 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:55Z","lastTransitionTime":"2025-12-02T10:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.003480 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.003550 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.003573 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.003605 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.003624 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.108077 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.108138 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.108155 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.108179 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.108196 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.211201 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.211267 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.211281 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.211302 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.211336 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.314314 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.314376 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.314387 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.314405 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.314418 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.417448 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.417500 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.417509 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.417524 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.417533 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.520364 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.520412 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.520422 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.520439 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.520451 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.623394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.623447 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.623459 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.623474 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.623486 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.725736 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.725783 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.725791 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.725805 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.725816 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.828598 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.828639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.828649 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.828665 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.828676 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.931372 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.931423 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.931435 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.931453 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:56 crc kubenswrapper[4711]: I1202 10:14:56.931466 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:56Z","lastTransitionTime":"2025-12-02T10:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.033552 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.033590 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.033601 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.033619 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.033633 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.078522 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.078560 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.078584 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.078655 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:57 crc kubenswrapper[4711]: E1202 10:14:57.078883 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:57 crc kubenswrapper[4711]: E1202 10:14:57.079106 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:57 crc kubenswrapper[4711]: E1202 10:14:57.079394 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:57 crc kubenswrapper[4711]: E1202 10:14:57.079599 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.136353 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.136402 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.136417 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.136439 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.136454 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.238744 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.238793 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.238804 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.238820 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.238832 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.341229 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.341285 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.341301 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.341323 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.341339 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.444456 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.444514 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.444527 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.444545 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.444556 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.547810 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.547861 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.547875 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.547900 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.547916 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.650603 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.650652 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.650660 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.650676 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.650685 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.752928 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.753010 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.753027 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.753049 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.753064 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.855469 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.855507 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.855519 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.855534 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.855546 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.958506 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.958578 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.958602 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.958632 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:57 crc kubenswrapper[4711]: I1202 10:14:57.958656 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:57Z","lastTransitionTime":"2025-12-02T10:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.061560 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.061603 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.061611 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.061625 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.061633 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.168510 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.168559 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.168572 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.168589 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.168602 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.271211 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.271309 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.271332 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.271359 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.271375 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.374253 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.374331 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.374340 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.374357 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.374366 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.478033 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.478100 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.478118 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.478143 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.478164 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.580683 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.580742 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.580757 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.580778 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.580795 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.685716 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.685851 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.685862 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.685876 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.685885 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.788473 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.788528 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.788540 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.788561 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.788574 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.891329 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.891402 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.891419 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.891447 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.891464 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.993884 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.993975 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.993992 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.994013 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:58 crc kubenswrapper[4711]: I1202 10:14:58.994027 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:58Z","lastTransitionTime":"2025-12-02T10:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.078572 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.079184 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.080544 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.080710 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:14:59 crc kubenswrapper[4711]: E1202 10:14:59.080707 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:14:59 crc kubenswrapper[4711]: E1202 10:14:59.080847 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:14:59 crc kubenswrapper[4711]: E1202 10:14:59.080942 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:14:59 crc kubenswrapper[4711]: E1202 10:14:59.081121 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.096588 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.096648 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.096665 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.096692 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.096707 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.199812 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.199867 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.199882 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.199898 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.199910 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.301828 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.301873 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.301885 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.301905 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.301917 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.404662 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.404698 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.404706 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.404720 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.404728 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.506939 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.507026 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.507038 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.507057 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.507071 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.609693 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.609735 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.609748 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.609764 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.609775 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.712854 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.712906 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.712934 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.712982 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.713001 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.815113 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.815143 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.815151 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.815163 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.815171 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.917938 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.918037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.918052 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.918070 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:14:59 crc kubenswrapper[4711]: I1202 10:14:59.918082 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:14:59Z","lastTransitionTime":"2025-12-02T10:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.020793 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.020934 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.021009 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.021036 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.021098 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.127931 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.127997 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.128006 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.128025 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.128035 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.230265 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.230296 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.230305 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.230320 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.230328 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.332866 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.332909 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.332920 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.332936 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.332959 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.435414 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.435458 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.435474 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.435493 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.435510 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.538615 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.538709 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.538728 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.538787 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.538808 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.642405 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.642472 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.642482 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.642524 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.642535 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.745427 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.745505 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.745529 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.745559 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.745581 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.849380 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.849465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.849480 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.849498 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.849518 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.952112 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.952145 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.952154 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.952167 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:00 crc kubenswrapper[4711]: I1202 10:15:00.952176 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:00Z","lastTransitionTime":"2025-12-02T10:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.054076 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.054113 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.054124 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.054139 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.054149 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.078968 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:01 crc kubenswrapper[4711]: E1202 10:15:01.079077 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.079132 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:01 crc kubenswrapper[4711]: E1202 10:15:01.079242 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.079760 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:01 crc kubenswrapper[4711]: E1202 10:15:01.079866 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.080218 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:01 crc kubenswrapper[4711]: E1202 10:15:01.080318 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.090241 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.091399 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.103625 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.114578 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.130880 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.142111 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.151992 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.156041 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.156076 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.156086 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.156100 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.156109 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.163181 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3185c5f0-a2f0-4322-983e-9bfa09bd083a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://283095cee32f35a91fa0b5cf45589e99f36719c211a1d5890567377b23f2b33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.177053 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.195387 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc 
kubenswrapper[4711]: I1202 10:15:01.207179 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.221547 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"2025-12-02T10:13:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc\\\\n2025-12-02T10:13:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc to /host/opt/cni/bin/\\\\n2025-12-02T10:13:57Z [verbose] multus-daemon started\\\\n2025-12-02T10:13:57Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T10:14:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.241266 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:49Z\\\",\\\"message\\\":\\\"t:[0a:58:0a:d9:00:04 10.217.0.4]} 
options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128010 6989 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128062 6989 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128104 6989 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.255551 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a0
0ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.258582 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.258737 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.258833 4711 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.258934 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.259053 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.269221 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is 
after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.282763 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.295855 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.307001 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.316806 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.361465 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.361536 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.361545 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.361558 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.361568 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.463777 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.463827 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.463842 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.463861 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.463871 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.566523 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.566573 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.566589 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.566606 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.566618 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.669562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.669613 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.669623 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.669639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.669650 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.772571 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.772620 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.772631 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.772646 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.772658 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.874977 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.875018 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.875028 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.875042 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.875052 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.977932 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.978026 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.978043 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.978065 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:01 crc kubenswrapper[4711]: I1202 10:15:01.978082 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:01Z","lastTransitionTime":"2025-12-02T10:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.080483 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.080529 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.080537 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.080555 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.080564 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.183253 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.183311 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.183327 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.183347 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.183361 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.286512 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.286568 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.286578 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.286594 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.286606 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.389221 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.389261 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.389273 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.389291 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.389305 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.492910 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.492972 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.492985 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.493001 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.493013 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.596425 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.596487 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.596512 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.596544 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.596567 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.678795 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.678833 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.678841 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.678855 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.678868 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: E1202 10:15:02.725152 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.734820 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.734870 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.734885 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.734905 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.734918 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: E1202 10:15:02.763613 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.767173 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.767207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.767216 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.767230 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.767240 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: E1202 10:15:02.782250 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.785840 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.785899 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.785915 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.785933 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.785944 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: E1202 10:15:02.798006 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:02Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:02 crc kubenswrapper[4711]: E1202 10:15:02.798170 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.799436 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.799473 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.799485 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.799501 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.799512 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.901571 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.901639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.901656 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.901681 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:02 crc kubenswrapper[4711]: I1202 10:15:02.901696 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:02Z","lastTransitionTime":"2025-12-02T10:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.003917 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.003990 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.004011 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.004060 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.004069 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.077900 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:03 crc kubenswrapper[4711]: E1202 10:15:03.078225 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.078300 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.078358 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.078367 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:03 crc kubenswrapper[4711]: E1202 10:15:03.078711 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:03 crc kubenswrapper[4711]: E1202 10:15:03.078904 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:03 crc kubenswrapper[4711]: E1202 10:15:03.079002 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.106976 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.107019 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.107027 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.107043 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.107072 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.210096 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.210141 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.210149 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.210163 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.210173 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.312508 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.312551 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.312562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.312577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.312587 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.415086 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.415126 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.415137 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.415195 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.415208 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.517809 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.517880 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.517903 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.517928 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.517994 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.621002 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.621049 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.621069 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.621098 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.621113 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.723837 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.723903 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.723922 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.723945 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.723987 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.826762 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.826819 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.826831 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.826857 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.826876 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.929685 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.929735 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.929745 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.929765 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:03 crc kubenswrapper[4711]: I1202 10:15:03.929775 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:03Z","lastTransitionTime":"2025-12-02T10:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.031712 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.031784 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.031798 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.031824 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.031836 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.078858 4711 scope.go:117] "RemoveContainer" containerID="24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726" Dec 02 10:15:04 crc kubenswrapper[4711]: E1202 10:15:04.079115 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.133920 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.133981 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.133997 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.134018 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.134032 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.237036 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.237109 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.237123 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.237148 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.237164 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.340416 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.340461 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.340472 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.340489 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.340501 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.444884 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.444966 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.444979 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.445001 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.445050 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.548237 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.548298 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.548308 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.548331 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.548341 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.651212 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.651360 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.651373 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.651393 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.651409 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.754578 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.754647 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.754669 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.754696 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.754716 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.858053 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.858146 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.858163 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.858192 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.858205 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.962694 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.962750 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.962762 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.962783 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:04 crc kubenswrapper[4711]: I1202 10:15:04.962796 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:04Z","lastTransitionTime":"2025-12-02T10:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.065971 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.066040 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.066051 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.066068 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.066084 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.078421 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.078430 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.078577 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.078808 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:05 crc kubenswrapper[4711]: E1202 10:15:05.078804 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:05 crc kubenswrapper[4711]: E1202 10:15:05.079014 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:05 crc kubenswrapper[4711]: E1202 10:15:05.079210 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:05 crc kubenswrapper[4711]: E1202 10:15:05.079352 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.170183 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.170244 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.170262 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.170283 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.170298 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.273006 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.273052 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.273068 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.273087 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.273099 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.377218 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.377272 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.377283 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.377303 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.377315 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.480354 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.480409 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.480421 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.480437 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.480454 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.583986 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.584049 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.584058 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.584076 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.584097 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.685977 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.686020 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.686029 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.686043 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.686055 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.788456 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.788501 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.788512 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.788527 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.788535 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.895214 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.895275 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.895289 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.895308 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.895325 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.997792 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.997853 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.997862 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.997876 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:05 crc kubenswrapper[4711]: I1202 10:15:05.997885 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:05Z","lastTransitionTime":"2025-12-02T10:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.101057 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.101104 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.101112 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.101126 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.101135 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.203474 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.203522 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.203533 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.203550 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.203562 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.306905 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.306963 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.306975 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.306997 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.307009 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.408828 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.408869 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.408878 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.408892 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.408902 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.511162 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.511238 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.511260 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.511297 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.511321 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.614524 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.614590 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.614600 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.614615 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.614626 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.716988 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.717044 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.717054 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.717066 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.717102 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.819180 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.819218 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.819229 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.819243 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.819253 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.922023 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.922061 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.922069 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.922083 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:06 crc kubenswrapper[4711]: I1202 10:15:06.922092 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:06Z","lastTransitionTime":"2025-12-02T10:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.024081 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.024128 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.024139 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.024155 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.024166 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.078253 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.078315 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.078268 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.078261 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:07 crc kubenswrapper[4711]: E1202 10:15:07.078406 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:07 crc kubenswrapper[4711]: E1202 10:15:07.078556 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:07 crc kubenswrapper[4711]: E1202 10:15:07.078651 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:07 crc kubenswrapper[4711]: E1202 10:15:07.078735 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.126349 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.126394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.126405 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.126420 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.126430 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.229027 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.229074 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.229090 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.229115 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.229134 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.332004 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.332085 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.332110 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.332143 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.332168 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.435090 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.435179 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.435216 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.435247 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.435270 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.537895 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.537973 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.537991 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.538013 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.538029 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.640830 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.640867 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.640876 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.640888 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.640897 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.742751 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.742780 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.742787 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.742799 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.742807 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.845207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.845293 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.845305 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.845330 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.845353 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.948451 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.948507 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.948523 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.948539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:07 crc kubenswrapper[4711]: I1202 10:15:07.948552 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:07Z","lastTransitionTime":"2025-12-02T10:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.051505 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.051550 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.051568 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.051588 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.051603 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.154669 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.154715 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.154725 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.154742 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.154752 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.257568 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.257627 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.257639 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.257656 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.257667 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.360819 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.360865 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.360880 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.360901 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.360916 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.463757 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.463800 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.463811 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.463828 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.463840 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.566777 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.566868 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.566891 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.566923 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.566989 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.669555 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.669602 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.669617 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.669636 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.669645 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.771856 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.772006 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.772046 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.772081 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.772106 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.875153 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.875239 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.875268 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.875295 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.875312 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.980560 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.980618 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.980635 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.980659 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:08 crc kubenswrapper[4711]: I1202 10:15:08.980677 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:08Z","lastTransitionTime":"2025-12-02T10:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.078321 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.078434 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:09 crc kubenswrapper[4711]: E1202 10:15:09.078605 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.078634 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.078655 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:09 crc kubenswrapper[4711]: E1202 10:15:09.078798 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:09 crc kubenswrapper[4711]: E1202 10:15:09.079011 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:09 crc kubenswrapper[4711]: E1202 10:15:09.079107 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.084128 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.084189 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.084212 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.084243 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.084265 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.186295 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.186343 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.186355 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.186370 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.186382 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.288978 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.289023 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.289035 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.289049 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.289059 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.391856 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.391931 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.391992 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.392025 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.392051 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.490616 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:09 crc kubenswrapper[4711]: E1202 10:15:09.490847 4711 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:15:09 crc kubenswrapper[4711]: E1202 10:15:09.491003 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs podName:87347875-9865-4380-a0ea-3fde5596dce7 nodeName:}" failed. No retries permitted until 2025-12-02 10:16:13.490943309 +0000 UTC m=+163.200309756 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs") pod "network-metrics-daemon-c82q2" (UID: "87347875-9865-4380-a0ea-3fde5596dce7") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.495257 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.495307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.495318 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.495337 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.495349 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.597636 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.597669 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.597678 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.597691 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.597700 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.699667 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.699724 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.699736 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.699751 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.699764 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.802467 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.802577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.802595 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.802612 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.802624 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.905991 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.906086 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.906111 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.906146 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:09 crc kubenswrapper[4711]: I1202 10:15:09.906165 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:09Z","lastTransitionTime":"2025-12-02T10:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.009730 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.009777 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.009800 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.009827 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.009842 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.112605 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.112689 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.112715 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.112748 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.112772 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.215683 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.215761 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.215784 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.215814 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.215836 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.318492 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.318536 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.318550 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.318568 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.318580 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.421937 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.422019 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.422034 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.422053 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.422066 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.524801 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.524854 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.524866 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.524883 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.524896 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.627862 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.627911 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.627925 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.627943 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.627979 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.730238 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.730293 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.730310 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.730332 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.730349 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.833164 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.833256 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.833268 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.833284 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.833295 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.936542 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.936601 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.936625 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.936655 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:10 crc kubenswrapper[4711]: I1202 10:15:10.936674 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:10Z","lastTransitionTime":"2025-12-02T10:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.040699 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.040790 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.040816 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.040848 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.040885 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.077584 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.077732 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.077599 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:11 crc kubenswrapper[4711]: E1202 10:15:11.077885 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.077998 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:11 crc kubenswrapper[4711]: E1202 10:15:11.078104 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:11 crc kubenswrapper[4711]: E1202 10:15:11.078280 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:11 crc kubenswrapper[4711]: E1202 10:15:11.078399 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.095851 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efbaf74e-a4b4-4086-8c79-f0f09ee085c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1751b4e38ded76090fb3a243c0fa3a375b89a9c209389972654301f94e19c7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f1e0bcca29c600cbaa0553971a4dfb99d29623b49370b3338a394072c6977f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://650a66281183f9ced391a018af3fe693dfffa393c8c1260d90c7ca5c4cabd181\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.106436 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g7srl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfcbeeaf-d773-49ac-bae3-b457ca7847d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71cb0917e7c8e67a4475d1bd4be1f2acb36a0a46dfff0d78bd7cf299a17b3c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bbb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g7srl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.122825 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.136508 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb30f9c-7935-4f91-84ef-5259aa64c7b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ecb003269e24e900f5822fa78bb70784ec04c62e3b8da13b3c86952c6fa453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8837406d54e473d2089c08f375d01ad0e1786e840dc961457d87ff7c6ec702a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99befd80b165fd81450d3fdee3346261ccb152822339d26aa0783c074af6b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c948a2ac5ad09bdcd3e31148ec7fdf07aa4ce77e9784092d9438e84aaa144616\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.144106 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.144172 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.144191 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.144248 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.144266 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.152620 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c21a1eeda28585ee499116b0965c2b986189ba14e0637da49632def9b8abe21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.167527 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.183824 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d793911cfad4a7e48b0e8340bfe2ac4fb27d42e88b55a4feec8c3da1a805bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.196475 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hcx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d542278-a5d9-41cd-b125-774fc4cbdb1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979bb8ace840e987594ddd691678a6e2c705f03df918eec5074f951de21a1d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-97mvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hcx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.209388 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3185c5f0-a2f0-4322-983e-9bfa09bd083a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://283095cee32f35a91fa0b5cf45589e99f36719c211a1d5890567377b23f2b33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f467bfde3c4938f675c9deac1bd6d3944f43ab489d6027734e4dbab2a7ad400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.230991 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df66760-8ea8-4a84-8bb8-3b44fa8c7ee9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86f8309e684edee2d9bcccea8a243aec2fd22a960fa88b380b4cd81c5397b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5762c74d517129793a7d1acf457aadf331bb3d7491e15f4e5c1442522d947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7131b78d6a9faa52aabc4226ec07a02eb614d8ccc3e01f5642253310e8cf017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5d72699a62ad1f54495a7388799d20e0d38a042c5772d381c64f0cc450da80\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc1a270a474908b3dd4e13fb4c5f885ea43ab9eaef7763b9398b9624a56adcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53842f72a9e6ad0de022a528398fc7e9662384e5921b7ce01efc257084710a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53842f72a9e6ad0de022a528398fc7e9662384e5921b7ce01efc257084710a27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64950403a18cc32d780be0336b27d3a3c27d1b4fd80abb94bd9cbf181ecfc56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64950403a18cc32d780be0336b27d3a3c27d1b4fd80abb94bd9cbf181ecfc56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b31452bd3870e1a14d4f9224bc3c80b225efea8e5f9d362c79eb5d24a275860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b31452bd3870e1a14d4f9224bc3c80b22
5efea8e5f9d362c79eb5d24a275860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.244252 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bd7360-ad0d-4725-84e3-28c7ba7e3695\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:13:44Z\\\"
,\\\"message\\\":\\\"W1202 10:13:33.813262 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 10:13:33.813778 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764670413 cert, and key in /tmp/serving-cert-3803279439/serving-signer.crt, /tmp/serving-cert-3803279439/serving-signer.key\\\\nI1202 10:13:34.045125 1 observer_polling.go:159] Starting file observer\\\\nW1202 10:13:34.061982 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 10:13:34.062123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:13:34.065502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3803279439/tls.crt::/tmp/serving-cert-3803279439/tls.key\\\\\\\"\\\\nF1202 10:13:44.585475 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-02T10:13:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.246116 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.246169 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.246186 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.246213 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.246232 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.256113 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c82q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87347875-9865-4380-a0ea-3fde5596dce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nhx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c82q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc 
kubenswrapper[4711]: I1202 10:15:11.269137 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0641e884-c845-499c-9ce6-0c4f1a893b5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869e096b5441bc01aa7071556d3127be787ae1c5b8bf8bf175b292f113cd6dbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8xqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9b9cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.282316 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4qrj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab88a2-3875-44a4-a926-7c76836b51b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:42Z\\\",\\\"message\\\":\\\"2025-12-02T10:13:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc\\\\n2025-12-02T10:13:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_023530bd-c729-4db2-8a91-f71d05a2dedc to /host/opt/cni/bin/\\\\n2025-12-02T10:13:57Z [verbose] multus-daemon started\\\\n2025-12-02T10:13:57Z [verbose] 
Readiness Indicator file check\\\\n2025-12-02T10:14:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4qrj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.302056 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"064b98c4-b388-4c62-bcbc-11037274acdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:14:49Z\\\",\\\"message\\\":\\\"t:[0a:58:0a:d9:00:04 10.217.0.4]} 
options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128010 6989 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128062 6989 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1202 10:14:49.128104 6989 model_client.go:382] Update operations generated as: [{Op:update Table:NA\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1559b3480dd38eb21b
4575cb1251952dc28e82d65d1686939068a0094d46e387\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68skn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n6sdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.313376 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6d8705-9138-499d-bacc-6464f4cca9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d0707ef7c5662411507d6df4854ce8d55246df8bb167b477526f3eb50e24eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0e25794b50884a92517a6e6b43390f290a0
0ad5b6b9705f3ea141953162d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9jv6b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:14:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rh62s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.325756 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f372c5f7fc7b945898319d60236591340d902ddcb55825f20bf394336b764b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aa0f5abe67f657f2b6908d939db2d8f51143f5977b1fc8d2ad0008fbe033f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:13:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.336838 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.349380 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.349418 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.349429 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.349445 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.349455 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.350903 4711 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9aece8-a05e-47ea-ab7f-b906e93c71c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505cf629e2bf067fc1464b97eb96601b9c1e2865680b867535e3dcae3f8d1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:14:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5415b430f462a987cb14254027f08897326998a1993de13d814345d98267a0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19a21d9a0d28f490c076771d7e4e25aab8984d5172ac806efd044f391ee0828f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b343d05860a43f3fd770a8014f239549c59872755caf6ac3054b06cc1e7b097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://993c0
789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://993c0789f1f7f56b189eb5c5bdf9102d5d47f8f0e0d559c9df2eb88afebf9bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:13:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:13:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379bfa1da7a572018da088973398177e69d92e62345659264b0788ea059ced2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1df283d39b989ae7831e0fec73e88521213667d6835f2a6c37c50aeac76feda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:14:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:14:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwpz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:13:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5xjmc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:11Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.452317 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.452369 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.452380 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.452394 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.452403 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.555062 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.555143 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.555157 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.555172 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.555182 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.657686 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.657725 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.657736 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.657753 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.657762 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.761063 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.761117 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.761129 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.761148 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.761161 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.863495 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.863558 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.863570 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.863585 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.863598 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.966369 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.966442 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.966464 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.966490 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:11 crc kubenswrapper[4711]: I1202 10:15:11.966511 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:11Z","lastTransitionTime":"2025-12-02T10:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.069484 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.069568 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.069600 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.069629 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.069655 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.174068 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.174125 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.174139 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.174161 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.174176 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.277053 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.277105 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.277115 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.277129 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.277138 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.379443 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.379516 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.379537 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.379566 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.379583 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.481756 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.481795 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.481806 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.481821 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.481831 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.584846 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.584889 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.584901 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.584917 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.584928 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.687400 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.687449 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.687461 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.687480 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.687492 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.789741 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.789777 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.789786 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.789818 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.789827 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.892529 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.892618 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.892638 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.892671 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.892688 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.936488 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.936530 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.936542 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.936559 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.936571 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: E1202 10:15:12.952843 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:12Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.956589 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.956643 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.956652 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.956667 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.956676 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: E1202 10:15:12.971619 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:12Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.975399 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.975430 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.975439 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.975451 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.975460 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:12 crc kubenswrapper[4711]: E1202 10:15:12.986793 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:12Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.990539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.990577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.990589 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.990606 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:12 crc kubenswrapper[4711]: I1202 10:15:12.990618 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:12Z","lastTransitionTime":"2025-12-02T10:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: E1202 10:15:13.002292 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:13Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.005383 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.005412 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.005421 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.005437 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.005448 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: E1202 10:15:13.016475 4711 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:15:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:15:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ef8c7e4-3cdb-42fb-8de0-8476dd0f383a\\\",\\\"systemUUID\\\":\\\"587f9aad-9cef-4053-bfa7-cda655f69c36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:15:13Z is after 2025-08-24T17:21:41Z" Dec 02 10:15:13 crc kubenswrapper[4711]: E1202 10:15:13.016657 4711 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.018075 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.018107 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.018119 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.018134 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.018145 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.078196 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.078275 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.078324 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.078201 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:13 crc kubenswrapper[4711]: E1202 10:15:13.078547 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:13 crc kubenswrapper[4711]: E1202 10:15:13.078663 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:13 crc kubenswrapper[4711]: E1202 10:15:13.078823 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:13 crc kubenswrapper[4711]: E1202 10:15:13.078922 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.121455 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.121505 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.121517 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.121536 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.121548 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.225690 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.225786 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.225800 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.225826 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.225841 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.329074 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.329115 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.329126 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.329143 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.329155 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.432306 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.432384 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.432403 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.432426 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.432446 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.535303 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.535361 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.535379 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.535399 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.535416 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.638190 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.638251 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.638273 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.638304 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.638326 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.741207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.741247 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.741256 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.741271 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.741281 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.844704 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.844742 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.844753 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.844770 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.844781 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.947363 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.947412 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.947424 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.947442 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:13 crc kubenswrapper[4711]: I1202 10:15:13.947454 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:13Z","lastTransitionTime":"2025-12-02T10:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.050172 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.050199 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.050207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.050222 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.050238 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.153246 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.153537 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.153618 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.153702 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.153783 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.256732 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.256786 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.256802 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.256821 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.256834 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.359864 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.360380 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.360551 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.360722 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.360876 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.464004 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.464055 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.464069 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.464088 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.464099 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.566673 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.566746 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.566769 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.566799 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.566820 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.669283 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.669320 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.669330 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.669346 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.669356 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.771857 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.771922 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.771942 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.772005 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.772023 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.874627 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.874654 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.874663 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.874676 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.874686 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.978521 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.978594 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.978612 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.978678 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:14 crc kubenswrapper[4711]: I1202 10:15:14.978698 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:14Z","lastTransitionTime":"2025-12-02T10:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.077474 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.077501 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.077589 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.077713 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:15 crc kubenswrapper[4711]: E1202 10:15:15.077890 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:15 crc kubenswrapper[4711]: E1202 10:15:15.078056 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:15 crc kubenswrapper[4711]: E1202 10:15:15.078285 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:15 crc kubenswrapper[4711]: E1202 10:15:15.078536 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.080604 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.080667 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.080689 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.080718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.080739 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.183898 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.183970 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.183986 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.184007 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.184021 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.286460 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.286706 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.286786 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.286850 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.286920 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.389876 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.390256 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.390405 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.390555 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.390687 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.493560 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.493880 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.494037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.494208 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.494350 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.596153 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.596680 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.596785 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.596876 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.596980 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.699293 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.699326 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.699337 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.699353 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.699363 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.802037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.802476 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.802628 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.802757 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.802901 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.906233 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.906353 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.906409 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.906431 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:15 crc kubenswrapper[4711]: I1202 10:15:15.906445 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:15Z","lastTransitionTime":"2025-12-02T10:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.008921 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.009014 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.009033 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.009056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.009071 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.111817 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.111867 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.111882 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.111902 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.111917 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.229456 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.229637 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.229669 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.229732 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.229752 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.331980 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.332028 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.332039 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.332054 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.332065 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.434307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.434344 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.434353 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.434370 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.434379 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.536680 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.536735 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.536746 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.536763 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.536774 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.639169 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.639211 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.639223 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.639241 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.639253 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.741933 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.742027 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.742051 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.742078 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.742096 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.846182 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.846227 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.846236 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.846252 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.846263 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.948751 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.948791 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.948802 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.948819 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:16 crc kubenswrapper[4711]: I1202 10:15:16.948830 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:16Z","lastTransitionTime":"2025-12-02T10:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.051314 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.051355 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.051367 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.051383 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.051394 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.078193 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.078205 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.078216 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.078527 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:17 crc kubenswrapper[4711]: E1202 10:15:17.078705 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:17 crc kubenswrapper[4711]: E1202 10:15:17.078766 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:17 crc kubenswrapper[4711]: E1202 10:15:17.078835 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:17 crc kubenswrapper[4711]: E1202 10:15:17.078882 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.079047 4711 scope.go:117] "RemoveContainer" containerID="24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726" Dec 02 10:15:17 crc kubenswrapper[4711]: E1202 10:15:17.079250 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n6sdh_openshift-ovn-kubernetes(064b98c4-b388-4c62-bcbc-11037274acdb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.154328 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.154386 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.154400 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.154421 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.154434 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.256887 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.256928 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.256945 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.256992 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.257006 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.359695 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.359750 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.359762 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.359780 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.359792 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.462496 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.462552 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.462562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.462575 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.462610 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.565091 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.565142 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.565153 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.565169 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.565179 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.668793 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.668873 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.668891 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.668928 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.668942 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.771195 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.771254 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.771272 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.771300 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.771318 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.875570 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.875646 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.875662 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.875691 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.875703 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.978758 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.978800 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.978810 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.978825 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:17 crc kubenswrapper[4711]: I1202 10:15:17.978837 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:17Z","lastTransitionTime":"2025-12-02T10:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.081305 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.081342 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.081351 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.081373 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.081382 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.184525 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.184570 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.184581 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.184594 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.184603 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.286576 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.286620 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.286628 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.286641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.286650 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.393119 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.393174 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.393183 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.393204 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.393215 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.495682 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.495736 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.495748 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.495769 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.495782 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.597850 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.597929 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.597967 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.597990 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.598005 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.700563 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.700682 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.700700 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.700723 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.700744 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.804342 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.804399 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.804414 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.804432 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.804444 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.908223 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.908357 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.908370 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.908455 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:18 crc kubenswrapper[4711]: I1202 10:15:18.908502 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:18Z","lastTransitionTime":"2025-12-02T10:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.011588 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.011643 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.011659 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.011685 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.011704 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.078236 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.078712 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.078781 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.078906 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:19 crc kubenswrapper[4711]: E1202 10:15:19.079110 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:19 crc kubenswrapper[4711]: E1202 10:15:19.079289 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:19 crc kubenswrapper[4711]: E1202 10:15:19.079567 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:19 crc kubenswrapper[4711]: E1202 10:15:19.079710 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.120468 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.120537 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.120623 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.120659 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.120684 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.224110 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.224175 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.224189 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.224207 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.224217 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.326365 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.326405 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.326447 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.326462 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.326471 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.429851 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.430231 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.430354 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.430492 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.430604 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.532431 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.532485 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.532498 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.532519 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.532531 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.635904 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.636027 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.636042 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.636062 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.636075 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.739491 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.739545 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.739558 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.739580 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.739594 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.842424 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.842839 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.843099 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.843315 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.843449 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.946460 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.946494 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.946502 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.946516 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:19 crc kubenswrapper[4711]: I1202 10:15:19.946525 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:19Z","lastTransitionTime":"2025-12-02T10:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.049288 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.049327 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.049339 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.049357 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.049371 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.152161 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.152214 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.152233 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.152258 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.152275 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.256154 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.256491 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.256696 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.256874 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.257063 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.360899 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.361269 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.361396 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.361527 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.362106 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.463806 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.463836 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.463844 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.463857 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.463866 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.566476 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.566523 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.566539 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.566562 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.566580 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.670056 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.670108 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.670120 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.670140 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.670152 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.772388 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.773141 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.773182 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.773205 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.773225 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.877066 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.877146 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.877157 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.877177 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.877191 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.980567 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.980627 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.980640 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.980661 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:20 crc kubenswrapper[4711]: I1202 10:15:20.980674 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:20Z","lastTransitionTime":"2025-12-02T10:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.078320 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:21 crc kubenswrapper[4711]: E1202 10:15:21.078580 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.078595 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.078713 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:21 crc kubenswrapper[4711]: E1202 10:15:21.078777 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.078727 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:21 crc kubenswrapper[4711]: E1202 10:15:21.078929 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:21 crc kubenswrapper[4711]: E1202 10:15:21.079128 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.083681 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.083727 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.083743 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.083767 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.083785 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.130176 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=36.130117399 podStartE2EDuration="36.130117399s" podCreationTimestamp="2025-12-02 10:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.130015175 +0000 UTC m=+110.839381632" watchObservedRunningTime="2025-12-02 10:15:21.130117399 +0000 UTC m=+110.839483866" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.163838 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=20.163817406 podStartE2EDuration="20.163817406s" podCreationTimestamp="2025-12-02 10:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.163110036 +0000 UTC m=+110.872476563" watchObservedRunningTime="2025-12-02 10:15:21.163817406 +0000 UTC m=+110.873183853" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.187441 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.187665 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.187729 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.187750 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.187762 4711 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.189497 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.189477954 podStartE2EDuration="1m30.189477954s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.189453273 +0000 UTC m=+110.898819760" watchObservedRunningTime="2025-12-02 10:15:21.189477954 +0000 UTC m=+110.898844411" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.229581 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=64.229536564 podStartE2EDuration="1m4.229536564s" podCreationTimestamp="2025-12-02 10:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.20585765 +0000 UTC m=+110.915224107" watchObservedRunningTime="2025-12-02 10:15:21.229536564 +0000 UTC m=+110.938903011" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.272744 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hcx25" podStartSLOduration=91.272714058 podStartE2EDuration="1m31.272714058s" podCreationTimestamp="2025-12-02 10:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 
10:15:21.272480702 +0000 UTC m=+110.981847189" watchObservedRunningTime="2025-12-02 10:15:21.272714058 +0000 UTC m=+110.982080505" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.286829 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podStartSLOduration=90.286806031 podStartE2EDuration="1m30.286806031s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.286467302 +0000 UTC m=+110.995833759" watchObservedRunningTime="2025-12-02 10:15:21.286806031 +0000 UTC m=+110.996172488" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.290894 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.290934 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.290943 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.290974 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.290985 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.330099 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4qrj7" podStartSLOduration=90.330067228 podStartE2EDuration="1m30.330067228s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.304471752 +0000 UTC m=+111.013838219" watchObservedRunningTime="2025-12-02 10:15:21.330067228 +0000 UTC m=+111.039433715" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.392934 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.392978 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.392989 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.393002 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.393011 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.439377 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5xjmc" podStartSLOduration=90.439352312 podStartE2EDuration="1m30.439352312s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.438443608 +0000 UTC m=+111.147810055" watchObservedRunningTime="2025-12-02 10:15:21.439352312 +0000 UTC m=+111.148718769" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.458400 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rh62s" podStartSLOduration=90.45838194 podStartE2EDuration="1m30.45838194s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.456821878 +0000 UTC m=+111.166188325" watchObservedRunningTime="2025-12-02 10:15:21.45838194 +0000 UTC m=+111.167748387" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.472519 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.472500574 podStartE2EDuration="1m29.472500574s" podCreationTimestamp="2025-12-02 10:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.47164144 +0000 UTC m=+111.181007897" watchObservedRunningTime="2025-12-02 10:15:21.472500574 +0000 UTC m=+111.181867021" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.482599 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-g7srl" 
podStartSLOduration=90.482576358 podStartE2EDuration="1m30.482576358s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:21.482204618 +0000 UTC m=+111.191571075" watchObservedRunningTime="2025-12-02 10:15:21.482576358 +0000 UTC m=+111.191942805" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.495081 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.495153 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.495166 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.495183 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.495194 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.597425 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.597467 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.597478 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.597498 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.597509 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.699449 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.699520 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.699538 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.699564 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.699582 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.802641 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.802732 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.802750 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.802810 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.802832 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.906127 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.906193 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.906210 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.906234 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:21 crc kubenswrapper[4711]: I1202 10:15:21.906252 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:21Z","lastTransitionTime":"2025-12-02T10:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.008519 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.008577 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.008604 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.008631 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.008647 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.111509 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.111578 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.111595 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.111621 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.111639 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.213892 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.213959 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.213975 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.213998 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.214009 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.316633 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.316697 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.316718 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.316744 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.316763 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.420803 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.421104 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.421206 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.421293 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.421369 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.523608 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.523646 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.523654 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.523669 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.523678 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.626448 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.626740 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.626908 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.627073 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.627176 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.729733 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.729762 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.729771 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.729784 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.729793 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.832307 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.832344 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.832356 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.832374 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.832384 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.934970 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.935008 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.935019 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.935037 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:22 crc kubenswrapper[4711]: I1202 10:15:22.935049 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:22Z","lastTransitionTime":"2025-12-02T10:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.038162 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.038202 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.038211 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.038229 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.038241 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:23Z","lastTransitionTime":"2025-12-02T10:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.077866 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.078080 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:23 crc kubenswrapper[4711]: E1202 10:15:23.078319 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.078225 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.078203 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:23 crc kubenswrapper[4711]: E1202 10:15:23.078551 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:23 crc kubenswrapper[4711]: E1202 10:15:23.078634 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:23 crc kubenswrapper[4711]: E1202 10:15:23.078782 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.139894 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.139937 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.139966 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.139983 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.140006 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:23Z","lastTransitionTime":"2025-12-02T10:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.184109 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.184402 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.184495 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.184628 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.184724 4711 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:15:23Z","lastTransitionTime":"2025-12-02T10:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.229047 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n"] Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.229615 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.232856 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.233232 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.233408 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.233788 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.343701 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960c3640-df20-492e-bbe4-5a1f9a591b1c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.343745 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/960c3640-df20-492e-bbe4-5a1f9a591b1c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.343766 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/960c3640-df20-492e-bbe4-5a1f9a591b1c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.343790 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/960c3640-df20-492e-bbe4-5a1f9a591b1c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.343834 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/960c3640-df20-492e-bbe4-5a1f9a591b1c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.444816 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960c3640-df20-492e-bbe4-5a1f9a591b1c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.444873 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/960c3640-df20-492e-bbe4-5a1f9a591b1c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.444909 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/960c3640-df20-492e-bbe4-5a1f9a591b1c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.444932 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/960c3640-df20-492e-bbe4-5a1f9a591b1c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.444988 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/960c3640-df20-492e-bbe4-5a1f9a591b1c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.445243 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/960c3640-df20-492e-bbe4-5a1f9a591b1c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.445246 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/960c3640-df20-492e-bbe4-5a1f9a591b1c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.446078 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960c3640-df20-492e-bbe4-5a1f9a591b1c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.454578 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/960c3640-df20-492e-bbe4-5a1f9a591b1c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.465706 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/960c3640-df20-492e-bbe4-5a1f9a591b1c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r7v4n\" (UID: \"960c3640-df20-492e-bbe4-5a1f9a591b1c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.545581 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.780942 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" event={"ID":"960c3640-df20-492e-bbe4-5a1f9a591b1c","Type":"ContainerStarted","Data":"ac04ae51b4d3ee49d463fb0ea51c9f2ce82b5646261808205de549df222e9ed1"} Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.781017 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" event={"ID":"960c3640-df20-492e-bbe4-5a1f9a591b1c","Type":"ContainerStarted","Data":"df4212412f239dc236e59f25fbc2f0b7e4119c886e99ab3d452b33f37757c392"} Dec 02 10:15:23 crc kubenswrapper[4711]: I1202 10:15:23.794522 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r7v4n" podStartSLOduration=92.794503663 podStartE2EDuration="1m32.794503663s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:23.793733711 +0000 UTC m=+113.503100188" watchObservedRunningTime="2025-12-02 10:15:23.794503663 +0000 UTC m=+113.503870110" Dec 02 10:15:25 crc kubenswrapper[4711]: I1202 10:15:25.078133 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:25 crc kubenswrapper[4711]: I1202 10:15:25.078184 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:25 crc kubenswrapper[4711]: I1202 10:15:25.078179 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:25 crc kubenswrapper[4711]: E1202 10:15:25.078274 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:25 crc kubenswrapper[4711]: E1202 10:15:25.078371 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:25 crc kubenswrapper[4711]: I1202 10:15:25.078402 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:25 crc kubenswrapper[4711]: E1202 10:15:25.078463 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:25 crc kubenswrapper[4711]: E1202 10:15:25.078531 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:27 crc kubenswrapper[4711]: I1202 10:15:27.078257 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:27 crc kubenswrapper[4711]: I1202 10:15:27.078293 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:27 crc kubenswrapper[4711]: I1202 10:15:27.078338 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:27 crc kubenswrapper[4711]: I1202 10:15:27.078258 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:27 crc kubenswrapper[4711]: E1202 10:15:27.078407 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:27 crc kubenswrapper[4711]: E1202 10:15:27.078536 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:27 crc kubenswrapper[4711]: E1202 10:15:27.078643 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:27 crc kubenswrapper[4711]: E1202 10:15:27.078728 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:28 crc kubenswrapper[4711]: I1202 10:15:28.796830 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/1.log" Dec 02 10:15:28 crc kubenswrapper[4711]: I1202 10:15:28.797275 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/0.log" Dec 02 10:15:28 crc kubenswrapper[4711]: I1202 10:15:28.797324 4711 generic.go:334] "Generic (PLEG): container finished" podID="2fab88a2-3875-44a4-a926-7c76836b51b8" containerID="783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6" exitCode=1 Dec 02 10:15:28 crc kubenswrapper[4711]: I1202 10:15:28.797367 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4qrj7" event={"ID":"2fab88a2-3875-44a4-a926-7c76836b51b8","Type":"ContainerDied","Data":"783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6"} Dec 02 10:15:28 crc kubenswrapper[4711]: I1202 10:15:28.797422 4711 scope.go:117] "RemoveContainer" containerID="04f8eaafe98a82d18156dd4da5d446616a1ae3d2d20665b8d586d532282c40e7" Dec 02 10:15:28 crc kubenswrapper[4711]: I1202 10:15:28.798139 4711 scope.go:117] "RemoveContainer" containerID="783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6" Dec 02 10:15:28 crc kubenswrapper[4711]: E1202 10:15:28.798332 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4qrj7_openshift-multus(2fab88a2-3875-44a4-a926-7c76836b51b8)\"" pod="openshift-multus/multus-4qrj7" podUID="2fab88a2-3875-44a4-a926-7c76836b51b8" Dec 02 10:15:29 crc kubenswrapper[4711]: I1202 10:15:29.078051 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:29 crc kubenswrapper[4711]: I1202 10:15:29.078111 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:29 crc kubenswrapper[4711]: I1202 10:15:29.078126 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:29 crc kubenswrapper[4711]: I1202 10:15:29.078100 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:29 crc kubenswrapper[4711]: E1202 10:15:29.078279 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:29 crc kubenswrapper[4711]: E1202 10:15:29.078430 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:29 crc kubenswrapper[4711]: E1202 10:15:29.078549 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:29 crc kubenswrapper[4711]: E1202 10:15:29.078747 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:29 crc kubenswrapper[4711]: I1202 10:15:29.803380 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/1.log" Dec 02 10:15:31 crc kubenswrapper[4711]: E1202 10:15:31.069156 4711 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 02 10:15:31 crc kubenswrapper[4711]: I1202 10:15:31.077632 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:31 crc kubenswrapper[4711]: I1202 10:15:31.081055 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:31 crc kubenswrapper[4711]: I1202 10:15:31.081078 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:31 crc kubenswrapper[4711]: I1202 10:15:31.081054 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:31 crc kubenswrapper[4711]: E1202 10:15:31.081216 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:31 crc kubenswrapper[4711]: E1202 10:15:31.081453 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:31 crc kubenswrapper[4711]: E1202 10:15:31.081719 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:31 crc kubenswrapper[4711]: E1202 10:15:31.081595 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:31 crc kubenswrapper[4711]: E1202 10:15:31.220514 4711 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 10:15:32 crc kubenswrapper[4711]: I1202 10:15:32.077992 4711 scope.go:117] "RemoveContainer" containerID="24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726" Dec 02 10:15:32 crc kubenswrapper[4711]: I1202 10:15:32.920527 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c82q2"] Dec 02 10:15:32 crc kubenswrapper[4711]: I1202 10:15:32.920975 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:32 crc kubenswrapper[4711]: E1202 10:15:32.921090 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:33 crc kubenswrapper[4711]: I1202 10:15:33.070098 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/3.log" Dec 02 10:15:33 crc kubenswrapper[4711]: I1202 10:15:33.073152 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerStarted","Data":"d1ed0bad318795fe5c82bf6bdb102e95cf4225a843f297d4a2cf129f71292667"} Dec 02 10:15:33 crc kubenswrapper[4711]: I1202 10:15:33.073510 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:15:33 crc kubenswrapper[4711]: I1202 10:15:33.079142 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:33 crc kubenswrapper[4711]: I1202 10:15:33.079186 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:33 crc kubenswrapper[4711]: I1202 10:15:33.079215 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:33 crc kubenswrapper[4711]: E1202 10:15:33.079246 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:33 crc kubenswrapper[4711]: E1202 10:15:33.079411 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:33 crc kubenswrapper[4711]: E1202 10:15:33.079481 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:34 crc kubenswrapper[4711]: I1202 10:15:34.077359 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:34 crc kubenswrapper[4711]: E1202 10:15:34.077582 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:35 crc kubenswrapper[4711]: I1202 10:15:35.077430 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:35 crc kubenswrapper[4711]: I1202 10:15:35.077544 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:35 crc kubenswrapper[4711]: E1202 10:15:35.077617 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:35 crc kubenswrapper[4711]: E1202 10:15:35.077701 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:35 crc kubenswrapper[4711]: I1202 10:15:35.077772 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:35 crc kubenswrapper[4711]: E1202 10:15:35.077892 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:36 crc kubenswrapper[4711]: I1202 10:15:36.077746 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:36 crc kubenswrapper[4711]: E1202 10:15:36.078030 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:36 crc kubenswrapper[4711]: E1202 10:15:36.222149 4711 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 10:15:37 crc kubenswrapper[4711]: I1202 10:15:37.077587 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:37 crc kubenswrapper[4711]: E1202 10:15:37.077934 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:37 crc kubenswrapper[4711]: I1202 10:15:37.077667 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:37 crc kubenswrapper[4711]: I1202 10:15:37.077725 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:37 crc kubenswrapper[4711]: E1202 10:15:37.078215 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:37 crc kubenswrapper[4711]: E1202 10:15:37.078380 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:38 crc kubenswrapper[4711]: I1202 10:15:38.077815 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:38 crc kubenswrapper[4711]: E1202 10:15:38.078392 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:39 crc kubenswrapper[4711]: I1202 10:15:39.078344 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:39 crc kubenswrapper[4711]: I1202 10:15:39.078505 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:39 crc kubenswrapper[4711]: I1202 10:15:39.078344 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:39 crc kubenswrapper[4711]: E1202 10:15:39.078894 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:39 crc kubenswrapper[4711]: E1202 10:15:39.079250 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:39 crc kubenswrapper[4711]: E1202 10:15:39.079465 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:40 crc kubenswrapper[4711]: I1202 10:15:40.078167 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:40 crc kubenswrapper[4711]: E1202 10:15:40.078369 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:41 crc kubenswrapper[4711]: I1202 10:15:41.078749 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:41 crc kubenswrapper[4711]: I1202 10:15:41.078767 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:41 crc kubenswrapper[4711]: I1202 10:15:41.078753 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:41 crc kubenswrapper[4711]: E1202 10:15:41.081281 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:41 crc kubenswrapper[4711]: E1202 10:15:41.081478 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:41 crc kubenswrapper[4711]: E1202 10:15:41.081675 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:41 crc kubenswrapper[4711]: E1202 10:15:41.222769 4711 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 10:15:42 crc kubenswrapper[4711]: I1202 10:15:42.078530 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:42 crc kubenswrapper[4711]: E1202 10:15:42.078771 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:42 crc kubenswrapper[4711]: I1202 10:15:42.079544 4711 scope.go:117] "RemoveContainer" containerID="783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6" Dec 02 10:15:42 crc kubenswrapper[4711]: I1202 10:15:42.108740 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podStartSLOduration=111.10869855 podStartE2EDuration="1m51.10869855s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:33.124699423 +0000 UTC m=+122.834065880" watchObservedRunningTime="2025-12-02 10:15:42.10869855 +0000 UTC m=+131.818065047" Dec 02 10:15:43 crc kubenswrapper[4711]: I1202 10:15:43.077601 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:43 crc kubenswrapper[4711]: I1202 10:15:43.077618 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:43 crc kubenswrapper[4711]: I1202 10:15:43.077737 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:43 crc kubenswrapper[4711]: E1202 10:15:43.078473 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:43 crc kubenswrapper[4711]: E1202 10:15:43.078633 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:43 crc kubenswrapper[4711]: E1202 10:15:43.078644 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:43 crc kubenswrapper[4711]: I1202 10:15:43.108295 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/1.log" Dec 02 10:15:43 crc kubenswrapper[4711]: I1202 10:15:43.108353 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4qrj7" event={"ID":"2fab88a2-3875-44a4-a926-7c76836b51b8","Type":"ContainerStarted","Data":"6b8753459d7fb04fe0374db1e644abb403557d98f0fa752fbe976882092f8082"} Dec 02 10:15:44 crc kubenswrapper[4711]: I1202 10:15:44.078276 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:44 crc kubenswrapper[4711]: E1202 10:15:44.078484 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:45 crc kubenswrapper[4711]: I1202 10:15:45.078501 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:45 crc kubenswrapper[4711]: I1202 10:15:45.078552 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:45 crc kubenswrapper[4711]: E1202 10:15:45.078660 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:15:45 crc kubenswrapper[4711]: I1202 10:15:45.078732 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:45 crc kubenswrapper[4711]: E1202 10:15:45.078898 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:15:45 crc kubenswrapper[4711]: E1202 10:15:45.079143 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:15:46 crc kubenswrapper[4711]: I1202 10:15:46.078209 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:46 crc kubenswrapper[4711]: E1202 10:15:46.078464 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c82q2" podUID="87347875-9865-4380-a0ea-3fde5596dce7" Dec 02 10:15:47 crc kubenswrapper[4711]: I1202 10:15:47.077906 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:47 crc kubenswrapper[4711]: I1202 10:15:47.077996 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:47 crc kubenswrapper[4711]: I1202 10:15:47.077921 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:47 crc kubenswrapper[4711]: I1202 10:15:47.080463 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 10:15:47 crc kubenswrapper[4711]: I1202 10:15:47.080466 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 10:15:47 crc kubenswrapper[4711]: I1202 10:15:47.081608 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 10:15:47 crc kubenswrapper[4711]: I1202 10:15:47.081660 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 10:15:48 crc kubenswrapper[4711]: I1202 10:15:48.078218 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:15:48 crc kubenswrapper[4711]: I1202 10:15:48.081152 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 10:15:48 crc kubenswrapper[4711]: I1202 10:15:48.082325 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.780694 4711 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.820759 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.821316 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.824378 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.824643 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.824663 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.824705 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.824708 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.824585 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.824787 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.824847 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.824837 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and 
this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.825359 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.825387 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.825419 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.825392 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.825612 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" 
in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.825729 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.825914 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.828508 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rc7wl"] Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.829018 4711 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.829424 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group 
\"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.830020 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.831527 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2c7s8"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.832444 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.833268 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-49njp"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.833906 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.835159 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.835750 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.837281 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.837744 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.839232 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5n6jf"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.839980 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.841167 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.841906 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.843241 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm"] Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.843308 4711 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.843352 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc 
kubenswrapper[4711]: I1202 10:15:53.843854 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.845059 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.846109 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.855987 4711 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.856078 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.865500 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.866269 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.866643 4711 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.868592 4711 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.868651 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.868714 4711 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.868731 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.868776 4711 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: 
configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.868794 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.868837 4711 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.868851 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.868909 4711 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 
10:15:53.868924 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.869133 4711 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.869152 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.869197 4711 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.869211 4711 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.869303 4711 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.869324 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.869396 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.869593 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.869918 4711 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace 
"openshift-route-controller-manager": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.870032 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870213 4711 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.870249 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870346 4711 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870330 4711 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Dec 02 10:15:53 
crc kubenswrapper[4711]: E1202 10:15:53.870405 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870437 4711 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870467 4711 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: secrets "cluster-samples-operator-dockercfg-xpp9w" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.870462 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.870365 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": 
Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.870486 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-samples-operator-dockercfg-xpp9w\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870520 4711 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.870543 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870588 4711 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is 
forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.870609 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870649 4711 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.870659 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.870691 4711 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.870715 
4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.870783 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.870991 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.871085 4711 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.871121 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.871128 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.871367 4711 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.871380 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.871524 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.871675 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.871761 4711 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.871777 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.871835 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.871881 4711 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the 
namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.871904 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.871973 4711 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.871946 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.871988 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.872027 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.872059 4711 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 10:15:53 crc kubenswrapper[4711]: W1202 10:15:53.872142 4711 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.872154 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.872210 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.872297 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.872309 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 10:15:53 crc kubenswrapper[4711]: E1202 10:15:53.869946 4711 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the 
namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.878033 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.878308 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.878442 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.878555 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.881119 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.881428 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6sr4n"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.882146 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4m2lb"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.882588 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.882716 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.883098 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4m2lb" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.883247 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.887650 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gz75"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.888495 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.888586 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-g2lxx"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.889328 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.890098 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.890538 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891008 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891028 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891032 4711 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891248 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891313 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891367 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891316 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891519 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891885 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.891927 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.892087 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.893064 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.893122 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jbwvc"] 
Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.893182 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.893322 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.893671 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.893843 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.894392 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.893728 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jc7xv"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.893800 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.895119 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.895222 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.895262 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.895369 4711 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.895702 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.895779 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.896055 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vqg5c"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.896420 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.906671 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.907099 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zrj4j"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.907633 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.908178 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.908252 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.908300 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.908326 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.908531 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.908859 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.910207 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.908588 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.910630 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.910785 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.910945 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.910822 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.910825 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.910857 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.910868 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.911015 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.916467 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.916848 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.917067 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 
10:15:53.918533 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rc7wl"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.922996 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.925274 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.926381 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.932995 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.937911 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.969127 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.969649 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.970024 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.970454 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.970660 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.970800 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.972926 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.973678 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974389 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9991ec66-70eb-4442-9e35-34e05d7c0dfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: \"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974438 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-client\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974456 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/72175ecc-cd0d-451c-a30d-59962898bec9-serving-cert\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974471 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-config\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974488 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9991ec66-70eb-4442-9e35-34e05d7c0dfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: \"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974508 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/764e4272-f3e2-4a3f-a390-7929850c7150-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2rvdj\" (UID: \"764e4272-f3e2-4a3f-a390-7929850c7150\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974524 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fc9623a-e271-4424-bb04-a0a502c81a8a-serving-cert\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974539 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-ca\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974559 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/72175ecc-cd0d-451c-a30d-59962898bec9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974581 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwbr\" (UniqueName: \"kubernetes.io/projected/9991ec66-70eb-4442-9e35-34e05d7c0dfd-kube-api-access-slwbr\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: \"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974595 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psp6c\" (UniqueName: \"kubernetes.io/projected/5fc9623a-e271-4424-bb04-a0a502c81a8a-kube-api-access-psp6c\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974615 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmkcs\" (UniqueName: \"kubernetes.io/projected/764e4272-f3e2-4a3f-a390-7929850c7150-kube-api-access-qmkcs\") pod \"cluster-samples-operator-665b6dd947-2rvdj\" (UID: \"764e4272-f3e2-4a3f-a390-7929850c7150\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974643 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7n4\" (UniqueName: \"kubernetes.io/projected/72175ecc-cd0d-451c-a30d-59962898bec9-kube-api-access-2t7n4\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.974667 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-service-ca\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.977207 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.977269 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.977725 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5tws"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.977787 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.978407 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9qbk"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.978469 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.979146 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.979554 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.979888 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.980466 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.980579 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.980467 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.982970 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.990739 4711 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-9hk72"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.991111 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.991679 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.993233 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22ngp"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.995309 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx"] Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.995845 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:53 crc kubenswrapper[4711]: I1202 10:15:53.996168 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.002650 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.003037 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.003323 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.005293 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.012381 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.012886 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.013048 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.019859 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.020867 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.020903 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.022303 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qzmsv"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.023430 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.023803 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.023997 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.025221 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.030159 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.030235 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.030252 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-49njp"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.033313 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6sr4n"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.035553 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2c7s8"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.037314 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gz75"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.038929 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2lxx"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.043015 
4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.046153 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.048911 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.051227 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.052937 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.054535 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jc7xv"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.055330 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4m2lb"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.057911 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.062461 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.062864 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.064077 4711 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jbwvc"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.065265 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.066335 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5n6jf"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.067382 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.068519 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9qbk"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.070146 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.071258 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.073063 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vqg5c"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.073769 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075024 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5tws"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075375 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9991ec66-70eb-4442-9e35-34e05d7c0dfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: \"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075405 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-client\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075424 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-config\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075440 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72175ecc-cd0d-451c-a30d-59962898bec9-serving-cert\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075458 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9991ec66-70eb-4442-9e35-34e05d7c0dfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: \"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:54 crc kubenswrapper[4711]: 
I1202 10:15:54.075475 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/764e4272-f3e2-4a3f-a390-7929850c7150-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2rvdj\" (UID: \"764e4272-f3e2-4a3f-a390-7929850c7150\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075491 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fc9623a-e271-4424-bb04-a0a502c81a8a-serving-cert\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075507 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-ca\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075522 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/72175ecc-cd0d-451c-a30d-59962898bec9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075546 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwbr\" (UniqueName: \"kubernetes.io/projected/9991ec66-70eb-4442-9e35-34e05d7c0dfd-kube-api-access-slwbr\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: 
\"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075562 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psp6c\" (UniqueName: \"kubernetes.io/projected/5fc9623a-e271-4424-bb04-a0a502c81a8a-kube-api-access-psp6c\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075582 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmkcs\" (UniqueName: \"kubernetes.io/projected/764e4272-f3e2-4a3f-a390-7929850c7150-kube-api-access-qmkcs\") pod \"cluster-samples-operator-665b6dd947-2rvdj\" (UID: \"764e4272-f3e2-4a3f-a390-7929850c7150\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075599 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7n4\" (UniqueName: \"kubernetes.io/projected/72175ecc-cd0d-451c-a30d-59962898bec9-kube-api-access-2t7n4\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.075635 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-service-ca\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.076088 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/72175ecc-cd0d-451c-a30d-59962898bec9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.076087 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9991ec66-70eb-4442-9e35-34e05d7c0dfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: \"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.076905 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9hk72"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.078461 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.080372 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.080598 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-n54fh"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.081210 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n54fh" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.081771 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mrbgr"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.081836 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72175ecc-cd0d-451c-a30d-59962898bec9-serving-cert\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.082677 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.082836 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qzmsv"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.082943 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.083932 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22ngp"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.084969 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n54fh"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.086176 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.087345 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx"] Dec 02 
10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.088422 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.089445 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mrbgr"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.090473 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qp5kk"] Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.091078 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.094925 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9991ec66-70eb-4442-9e35-34e05d7c0dfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: \"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.102980 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.108500 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fc9623a-e271-4424-bb04-a0a502c81a8a-serving-cert\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.123076 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 10:15:54 crc kubenswrapper[4711]: 
I1202 10:15:54.128836 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-client\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.143496 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.146817 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-config\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.163642 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.166187 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-service-ca\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.184224 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.186379 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5fc9623a-e271-4424-bb04-a0a502c81a8a-etcd-ca\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.223625 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.243044 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.263292 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.283116 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.303335 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.323381 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.343629 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.363278 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.382705 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.403842 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.423490 4711 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.444354 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.463614 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.482595 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.503499 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.524260 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.544289 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.564122 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.604601 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.623595 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.643740 4711 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.664538 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.684564 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.704497 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.725011 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.753971 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.764137 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.783418 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.805524 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.824096 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.844429 4711 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.864412 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.884366 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.903845 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.925074 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.943585 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.963461 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.981931 4711 request.go:700] Waited for 1.003077906s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Dec 02 10:15:54 crc kubenswrapper[4711]: I1202 10:15:54.983161 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.004389 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 10:15:55 crc 
kubenswrapper[4711]: I1202 10:15:55.030932 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.044274 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.063468 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 10:15:55 crc kubenswrapper[4711]: E1202 10:15:55.076354 4711 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 02 10:15:55 crc kubenswrapper[4711]: E1202 10:15:55.076750 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/764e4272-f3e2-4a3f-a390-7929850c7150-samples-operator-tls podName:764e4272-f3e2-4a3f-a390-7929850c7150 nodeName:}" failed. No retries permitted until 2025-12-02 10:15:55.576705518 +0000 UTC m=+145.286072015 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/764e4272-f3e2-4a3f-a390-7929850c7150-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-2rvdj" (UID: "764e4272-f3e2-4a3f-a390-7929850c7150") : failed to sync secret cache: timed out waiting for the condition Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.083228 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.103634 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.124195 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.143201 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.163714 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.184650 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.204039 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.223436 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.243870 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 10:15:55 crc 
kubenswrapper[4711]: I1202 10:15:55.263408 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.283593 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.304029 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.324697 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.343547 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.364315 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.388645 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.404331 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.424049 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.443297 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 10:15:55 crc 
kubenswrapper[4711]: I1202 10:15:55.464920 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.484019 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.504284 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.525378 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.544786 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.564134 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.584085 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.596404 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/764e4272-f3e2-4a3f-a390-7929850c7150-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2rvdj\" (UID: \"764e4272-f3e2-4a3f-a390-7929850c7150\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.604123 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" 
Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.623663 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.643725 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.664070 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.723575 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psp6c\" (UniqueName: \"kubernetes.io/projected/5fc9623a-e271-4424-bb04-a0a502c81a8a-kube-api-access-psp6c\") pod \"etcd-operator-b45778765-vqg5c\" (UID: \"5fc9623a-e271-4424-bb04-a0a502c81a8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.744540 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slwbr\" (UniqueName: \"kubernetes.io/projected/9991ec66-70eb-4442-9e35-34e05d7c0dfd-kube-api-access-slwbr\") pod \"openshift-controller-manager-operator-756b6f6bc6-fsfqp\" (UID: \"9991ec66-70eb-4442-9e35-34e05d7c0dfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.762947 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.763306 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7n4\" (UniqueName: \"kubernetes.io/projected/72175ecc-cd0d-451c-a30d-59962898bec9-kube-api-access-2t7n4\") pod \"openshift-config-operator-7777fb866f-bs6xm\" (UID: \"72175ecc-cd0d-451c-a30d-59962898bec9\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.783581 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.791467 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.804581 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.818496 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.823206 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.844344 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.865377 4711 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.867977 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.884211 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.903611 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.924495 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.943617 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.982387 4711 request.go:700] Waited for 1.366354531s due to client-side throttling, not priority and fairness, request: PATCH:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-b45778765-vqg5c/status Dec 02 10:15:55 crc kubenswrapper[4711]: I1202 10:15:55.996633 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp"] Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001613 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5f3363-25e9-4f5b-94ed-843a17d17997-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001636 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001654 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv598\" (UniqueName: \"kubernetes.io/projected/a996dfa5-84ad-41e6-aee0-ed17df150b5b-kube-api-access-xv598\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001672 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001689 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001715 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-encryption-config\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001744 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001759 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001774 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2lkk\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-kube-api-access-w2lkk\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001789 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001862 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-config\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001897 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b9bbe773-63b6-490e-a058-a12050a40b4a-node-pullsecrets\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001912 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001943 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-stats-auth\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.001984 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a996dfa5-84ad-41e6-aee0-ed17df150b5b-serving-cert\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002009 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-etcd-serving-ca\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002032 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002054 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002093 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxpdb\" (UniqueName: \"kubernetes.io/projected/067e3491-3d3c-4bc6-a164-9093f895fbcf-kube-api-access-vxpdb\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002110 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-policies\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002132 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91034075-af22-4fae-8684-a8914596c1ac-serving-cert\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002148 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8mwl\" (UniqueName: \"kubernetes.io/projected/34333510-8dc2-45c2-9c08-013bdb2bcd85-kube-api-access-r8mwl\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002163 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067e3491-3d3c-4bc6-a164-9093f895fbcf-serving-cert\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002181 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e365db0-c1ec-415c-8310-02e222ac80c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: 
\"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.002194 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:56.502180137 +0000 UTC m=+146.211546584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002216 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-audit-policies\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002236 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-trusted-ca-bundle\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002253 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002270 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea1baa74-f09b-497d-a9df-d73953bf8a22-metrics-tls\") pod \"dns-operator-744455d44c-jbwvc\" (UID: \"ea1baa74-f09b-497d-a9df-d73953bf8a22\") " pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002289 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d5f3363-25e9-4f5b-94ed-843a17d17997-images\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002307 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-etcd-client\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002398 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-config\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002416 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-audit\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002433 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b49lf\" (UniqueName: \"kubernetes.io/projected/53da86b0-43ce-4526-97db-a82df759ef58-kube-api-access-b49lf\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002477 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j85g\" (UniqueName: \"kubernetes.io/projected/b9bbe773-63b6-490e-a058-a12050a40b4a-kube-api-access-9j85g\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002498 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbw6g\" (UniqueName: \"kubernetes.io/projected/ea1baa74-f09b-497d-a9df-d73953bf8a22-kube-api-access-zbw6g\") pod \"dns-operator-744455d44c-jbwvc\" (UID: \"ea1baa74-f09b-497d-a9df-d73953bf8a22\") " pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002532 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd1e0117-926f-4673-ac70-b25dc56e7403-audit-dir\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: 
\"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002551 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwzlg\" (UniqueName: \"kubernetes.io/projected/8c1f70ef-1183-4621-bb91-ffe2d31fa391-kube-api-access-cwzlg\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002579 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-encryption-config\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002596 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-oauth-config\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002610 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-dir\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002638 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4d5f3363-25e9-4f5b-94ed-843a17d17997-config\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002727 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-config\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002770 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgsns\" (UniqueName: \"kubernetes.io/projected/2e365db0-c1ec-415c-8310-02e222ac80c1-kube-api-access-mgsns\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002801 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-trusted-ca\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002820 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-serving-cert\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc 
kubenswrapper[4711]: I1202 10:15:56.002862 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-certificates\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002896 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5sg\" (UniqueName: \"kubernetes.io/projected/585fe769-d9ad-42f1-8cb6-29904018f637-kube-api-access-kh5sg\") pod \"downloads-7954f5f757-4m2lb\" (UID: \"585fe769-d9ad-42f1-8cb6-29904018f637\") " pod="openshift-console/downloads-7954f5f757-4m2lb" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002925 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002942 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-tls\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002983 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvxs\" (UniqueName: \"kubernetes.io/projected/4d5f3363-25e9-4f5b-94ed-843a17d17997-kube-api-access-hlvxs\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.002999 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-image-import-ca\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003015 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2367c432-3e23-436c-aff6-31e1c32f8809-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003030 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91034075-af22-4fae-8684-a8914596c1ac-config\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003043 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-serving-cert\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003058 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mf2k\" (UniqueName: \"kubernetes.io/projected/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-kube-api-access-8mf2k\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003071 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-config\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003084 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-service-ca\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003099 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003123 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003138 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b8tcc\" (UniqueName: \"kubernetes.io/projected/2367c432-3e23-436c-aff6-31e1c32f8809-kube-api-access-b8tcc\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003155 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6jm\" (UniqueName: \"kubernetes.io/projected/91034075-af22-4fae-8684-a8914596c1ac-kube-api-access-mt6jm\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003172 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-client-ca\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003234 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003256 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34333510-8dc2-45c2-9c08-013bdb2bcd85-service-ca-bundle\") pod \"router-default-5444994796-zrj4j\" (UID: 
\"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003273 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-metrics-certs\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003293 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-config\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003321 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da86b0-43ce-4526-97db-a82df759ef58-serving-cert\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003341 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003382 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-config\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003406 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003424 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2367c432-3e23-436c-aff6-31e1c32f8809-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003449 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-bound-sa-token\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003469 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003529 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91034075-af22-4fae-8684-a8914596c1ac-trusted-ca\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003549 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-client-ca\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003572 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-etcd-client\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003609 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003650 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-default-certificate\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003664 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9bbe773-63b6-490e-a058-a12050a40b4a-audit-dir\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003704 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-oauth-serving-cert\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003722 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003754 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e365db0-c1ec-415c-8310-02e222ac80c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc 
kubenswrapper[4711]: I1202 10:15:56.003773 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc49d\" (UniqueName: \"kubernetes.io/projected/fd1e0117-926f-4673-ac70-b25dc56e7403-kube-api-access-bc49d\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003790 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003806 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003846 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003869 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e365db0-c1ec-415c-8310-02e222ac80c1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003899 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-serving-cert\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.003915 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: W1202 10:15:56.011206 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9991ec66_70eb_4442_9e35_34e05d7c0dfd.slice/crio-e1aa391e5e6f4251c105724afa6d4525e50fe953768c2f85e7973f4e3044c940 WatchSource:0}: Error finding container e1aa391e5e6f4251c105724afa6d4525e50fe953768c2f85e7973f4e3044c940: Status 404 returned error can't find the container with id e1aa391e5e6f4251c105724afa6d4525e50fe953768c2f85e7973f4e3044c940 Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.016711 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm"] Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.023765 4711 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 10:15:56 crc kubenswrapper[4711]: W1202 10:15:56.024866 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72175ecc_cd0d_451c_a30d_59962898bec9.slice/crio-a2dac00ea6ffb144ea82f761d96c78672eab9b57a043c7e7ef981e506bf1732e WatchSource:0}: Error finding container a2dac00ea6ffb144ea82f761d96c78672eab9b57a043c7e7ef981e506bf1732e: Status 404 returned error can't find the container with id a2dac00ea6ffb144ea82f761d96c78672eab9b57a043c7e7ef981e506bf1732e Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.043676 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.064751 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.078666 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vqg5c"] Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.084028 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: W1202 10:15:56.086798 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc9623a_e271_4424_bb04_a0a502c81a8a.slice/crio-1a74a0ec29be27939e6fa2d677015f41619924cfa67c82464af5e4aab01f24e8 WatchSource:0}: Error finding container 1a74a0ec29be27939e6fa2d677015f41619924cfa67c82464af5e4aab01f24e8: Status 404 returned error can't find the container with id 1a74a0ec29be27939e6fa2d677015f41619924cfa67c82464af5e4aab01f24e8 Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.103195 4711 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105064 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.105272 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:56.605225397 +0000 UTC m=+146.314591844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105333 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34333510-8dc2-45c2-9c08-013bdb2bcd85-service-ca-bundle\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105378 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105409 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/334bed7f-2766-4ef7-9eee-9312c1453bbb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105436 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3bc47fe-80a3-4a18-bada-6eebb2abff12-apiservice-cert\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105460 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h68g\" (UniqueName: \"kubernetes.io/projected/b3bc47fe-80a3-4a18-bada-6eebb2abff12-kube-api-access-7h68g\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105534 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-config\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105601 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2367c432-3e23-436c-aff6-31e1c32f8809-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105633 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105666 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf74s\" (UniqueName: \"kubernetes.io/projected/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-kube-api-access-sf74s\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105713 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-etcd-client\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105746 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4wptz\" (UniqueName: \"kubernetes.io/projected/bf7b0fe2-c9dc-4486-926c-0432cfc03172-kube-api-access-4wptz\") pod \"ingress-canary-n54fh\" (UID: \"bf7b0fe2-c9dc-4486-926c-0432cfc03172\") " pod="openshift-ingress-canary/ingress-canary-n54fh" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105769 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105786 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94ac6b65-651b-4461-90de-62a5987b52e0-images\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105802 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf7b0fe2-c9dc-4486-926c-0432cfc03172-cert\") pod \"ingress-canary-n54fh\" (UID: \"bf7b0fe2-c9dc-4486-926c-0432cfc03172\") " pod="openshift-ingress-canary/ingress-canary-n54fh" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105818 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-config-volume\") pod \"collect-profiles-29411175-prrng\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:56 crc 
kubenswrapper[4711]: I1202 10:15:56.105836 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/140261a1-9b1a-4829-8926-6ddaaaf55503-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105858 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.105939 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b69aaa1-6c16-4417-a703-281e816a8642-config\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106331 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e365db0-c1ec-415c-8310-02e222ac80c1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106363 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-serving-cert\") pod 
\"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106388 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106414 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5f3363-25e9-4f5b-94ed-843a17d17997-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106441 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106474 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106498 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106519 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106554 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-config\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106567 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34333510-8dc2-45c2-9c08-013bdb2bcd85-service-ca-bundle\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106816 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106579 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106873 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80d9d16b-8c37-4d97-8179-12909d9c2f53-srv-cert\") pod \"olm-operator-6b444d44fb-47pfg\" (UID: \"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106910 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-stats-auth\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106929 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4sbm\" (UniqueName: \"kubernetes.io/projected/eb441657-98d8-4fbe-9f7f-50c64f1414ba-kube-api-access-f4sbm\") pod \"migrator-59844c95c7-wrrqk\" (UID: \"eb441657-98d8-4fbe-9f7f-50c64f1414ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106962 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106980 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lz9s\" (UniqueName: \"kubernetes.io/projected/8f8ce2ea-2335-474b-97c9-1108e2157a2b-kube-api-access-4lz9s\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.106998 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvnf5\" (UniqueName: \"kubernetes.io/projected/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-kube-api-access-jvnf5\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107016 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mskn\" (UniqueName: \"kubernetes.io/projected/5b69aaa1-6c16-4417-a703-281e816a8642-kube-api-access-2mskn\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107050 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-config-volume\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 
10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107072 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107101 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxpdb\" (UniqueName: \"kubernetes.io/projected/067e3491-3d3c-4bc6-a164-9093f895fbcf-kube-api-access-vxpdb\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107117 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107133 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8mwl\" (UniqueName: \"kubernetes.io/projected/34333510-8dc2-45c2-9c08-013bdb2bcd85-kube-api-access-r8mwl\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107151 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/067e3491-3d3c-4bc6-a164-9093f895fbcf-serving-cert\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107166 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-audit-policies\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107184 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-trusted-ca-bundle\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107199 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-etcd-client\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107213 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-config\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107228 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d228ed-836d-4316-b404-a281ae332a8a-config\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107243 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbab21cc-3581-4de0-911c-6baad4d03087-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9hk72\" (UID: \"dbab21cc-3581-4de0-911c-6baad4d03087\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107259 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-registration-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107276 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-audit\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107291 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b49lf\" (UniqueName: \"kubernetes.io/projected/53da86b0-43ce-4526-97db-a82df759ef58-kube-api-access-b49lf\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: 
I1202 10:15:56.107307 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd1e0117-926f-4673-ac70-b25dc56e7403-audit-dir\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107323 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a26a2431-9c9a-48e9-9797-bf9ac466fc2f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jcqjc\" (UID: \"a26a2431-9c9a-48e9-9797-bf9ac466fc2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107340 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-oauth-config\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107365 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d5f3363-25e9-4f5b-94ed-843a17d17997-config\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107382 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-plugins-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " 
pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107411 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8f8ce2ea-2335-474b-97c9-1108e2157a2b-signing-cabundle\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107427 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-certificates\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107444 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh5sg\" (UniqueName: \"kubernetes.io/projected/585fe769-d9ad-42f1-8cb6-29904018f637-kube-api-access-kh5sg\") pod \"downloads-7954f5f757-4m2lb\" (UID: \"585fe769-d9ad-42f1-8cb6-29904018f637\") " pod="openshift-console/downloads-7954f5f757-4m2lb" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107460 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-bound-sa-token\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107478 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-tls\") pod 
\"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107494 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlvxs\" (UniqueName: \"kubernetes.io/projected/4d5f3363-25e9-4f5b-94ed-843a17d17997-kube-api-access-hlvxs\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107537 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-image-import-ca\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107579 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2367c432-3e23-436c-aff6-31e1c32f8809-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107598 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-service-ca\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107616 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8tcc\" (UniqueName: 
\"kubernetes.io/projected/2367c432-3e23-436c-aff6-31e1c32f8809-kube-api-access-b8tcc\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107632 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/140261a1-9b1a-4829-8926-6ddaaaf55503-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107648 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w52lp\" (UniqueName: \"kubernetes.io/projected/47aad68f-ce63-48d4-8fff-59513ec9f7b6-kube-api-access-w52lp\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107666 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-client-ca\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107687 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107704 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-metrics-certs\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107722 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-config\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107739 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da86b0-43ce-4526-97db-a82df759ef58-serving-cert\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107755 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq98b\" (UniqueName: \"kubernetes.io/projected/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-kube-api-access-bq98b\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107761 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e365db0-c1ec-415c-8310-02e222ac80c1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: 
\"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107772 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107787 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-metrics-tls\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107804 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl9sx\" (UniqueName: \"kubernetes.io/projected/28ac9884-97cd-4efb-90f1-b742a5a9a519-kube-api-access-nl9sx\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107819 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-bound-sa-token\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107835 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6jctq\" (UniqueName: \"kubernetes.io/projected/8e8230b2-fb50-43c7-8a69-af1d02cce895-kube-api-access-6jctq\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107852 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80d9d16b-8c37-4d97-8179-12909d9c2f53-profile-collector-cert\") pod \"olm-operator-6b444d44fb-47pfg\" (UID: \"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.107872 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:56.607854969 +0000 UTC m=+146.317221426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107927 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91034075-af22-4fae-8684-a8914596c1ac-trusted-ca\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.107983 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-client-ca\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.108032 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.108075 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-trusted-ca\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.108100 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7nqn\" (UniqueName: \"kubernetes.io/projected/f6b3df69-76ea-4424-8d50-b2646cf2cd0e-kube-api-access-l7nqn\") pod \"control-plane-machine-set-operator-78cbb6b69f-7kmjp\" (UID: \"f6b3df69-76ea-4424-8d50-b2646cf2cd0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.108129 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-default-certificate\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.108153 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9bbe773-63b6-490e-a058-a12050a40b4a-audit-dir\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.108163 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.108177 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-metrics-tls\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.108727 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.109608 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-service-ca\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.109655 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-config\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.109685 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.109689 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9bbe773-63b6-490e-a058-a12050a40b4a-audit-dir\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.110616 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-image-import-ca\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.110889 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-client-ca\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.110970 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b3bc47fe-80a3-4a18-bada-6eebb2abff12-tmpfs\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111019 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-oauth-serving-cert\") pod \"console-f9d7485db-g2lxx\" (UID: 
\"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111044 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111071 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43cb1081-b058-41a4-a47c-e106aa0d2e41-profile-collector-cert\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111100 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e365db0-c1ec-415c-8310-02e222ac80c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111125 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc49d\" (UniqueName: \"kubernetes.io/projected/fd1e0117-926f-4673-ac70-b25dc56e7403-kube-api-access-bc49d\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111149 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111175 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111188 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-trusted-ca-bundle\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111198 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111222 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv598\" (UniqueName: \"kubernetes.io/projected/a996dfa5-84ad-41e6-aee0-ed17df150b5b-kube-api-access-xv598\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.111470 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91034075-af22-4fae-8684-a8914596c1ac-trusted-ca\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.112222 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.112215 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd1e0117-926f-4673-ac70-b25dc56e7403-audit-dir\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.112713 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113061 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113093 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-encryption-config\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113166 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2lkk\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-kube-api-access-w2lkk\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113300 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-config\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113370 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6b3df69-76ea-4424-8d50-b2646cf2cd0e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7kmjp\" (UID: \"f6b3df69-76ea-4424-8d50-b2646cf2cd0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113409 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-d86dp\" (UniqueName: \"kubernetes.io/projected/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-kube-api-access-d86dp\") pod \"collect-profiles-29411175-prrng\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113438 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b9bbe773-63b6-490e-a058-a12050a40b4a-node-pullsecrets\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113474 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-oauth-serving-cert\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113476 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmrnd\" (UniqueName: \"kubernetes.io/projected/94ac6b65-651b-4461-90de-62a5987b52e0-kube-api-access-xmrnd\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113544 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b9bbe773-63b6-490e-a058-a12050a40b4a-node-pullsecrets\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 
10:15:56.113617 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a996dfa5-84ad-41e6-aee0-ed17df150b5b-serving-cert\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.113740 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsw4g\" (UniqueName: \"kubernetes.io/projected/43cb1081-b058-41a4-a47c-e106aa0d2e41-kube-api-access-hsw4g\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114157 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114159 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-etcd-serving-ca\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114233 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114290 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90d228ed-836d-4316-b404-a281ae332a8a-machine-approver-tls\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114522 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114820 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a996dfa5-84ad-41e6-aee0-ed17df150b5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114894 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8f8ce2ea-2335-474b-97c9-1108e2157a2b-signing-key\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114941 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/334bed7f-2766-4ef7-9eee-9312c1453bbb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.114993 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-secret-volume\") pod \"collect-profiles-29411175-prrng\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115030 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-stats-auth\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115090 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140261a1-9b1a-4829-8926-6ddaaaf55503-config\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115121 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 
02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115184 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-policies\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115336 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-mountpoint-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115487 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91034075-af22-4fae-8684-a8914596c1ac-serving-cert\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115604 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e365db0-c1ec-415c-8310-02e222ac80c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115723 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-certificates\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: 
\"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115752 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2367c432-3e23-436c-aff6-31e1c32f8809-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115765 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea1baa74-f09b-497d-a9df-d73953bf8a22-metrics-tls\") pod \"dns-operator-744455d44c-jbwvc\" (UID: \"ea1baa74-f09b-497d-a9df-d73953bf8a22\") " pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115741 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-encryption-config\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115824 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-policies\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115859 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.115985 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d5f3363-25e9-4f5b-94ed-843a17d17997-images\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116020 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntzk\" (UniqueName: \"kubernetes.io/projected/90d228ed-836d-4316-b404-a281ae332a8a-kube-api-access-xntzk\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116072 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3bc47fe-80a3-4a18-bada-6eebb2abff12-webhook-cert\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116104 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j85g\" (UniqueName: \"kubernetes.io/projected/b9bbe773-63b6-490e-a058-a12050a40b4a-kube-api-access-9j85g\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 
10:15:56.116108 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-tls\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116134 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbw6g\" (UniqueName: \"kubernetes.io/projected/ea1baa74-f09b-497d-a9df-d73953bf8a22-kube-api-access-zbw6g\") pod \"dns-operator-744455d44c-jbwvc\" (UID: \"ea1baa74-f09b-497d-a9df-d73953bf8a22\") " pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116293 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwzlg\" (UniqueName: \"kubernetes.io/projected/8c1f70ef-1183-4621-bb91-ffe2d31fa391-kube-api-access-cwzlg\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116351 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43cb1081-b058-41a4-a47c-e106aa0d2e41-srv-cert\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116482 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-encryption-config\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" 
Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116510 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-default-certificate\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116542 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-dir\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116673 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94ac6b65-651b-4461-90de-62a5987b52e0-proxy-tls\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116744 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-config\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116784 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgsns\" (UniqueName: \"kubernetes.io/projected/2e365db0-c1ec-415c-8310-02e222ac80c1-kube-api-access-mgsns\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: 
\"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116810 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-csi-data-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116869 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvgd\" (UniqueName: \"kubernetes.io/projected/d4dfbfed-09bb-4b36-bcfb-326077388f98-kube-api-access-4lvgd\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116929 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-socket-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.116989 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28ac9884-97cd-4efb-90f1-b742a5a9a519-proxy-tls\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117023 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-trusted-ca\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117052 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-serving-cert\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117118 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d5f3363-25e9-4f5b-94ed-843a17d17997-images\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117133 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/334bed7f-2766-4ef7-9eee-9312c1453bbb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117166 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94ac6b65-651b-4461-90de-62a5987b52e0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 
10:15:56.117199 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b69aaa1-6c16-4417-a703-281e816a8642-serving-cert\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117232 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a996dfa5-84ad-41e6-aee0-ed17df150b5b-serving-cert\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117271 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90d228ed-836d-4316-b404-a281ae332a8a-auth-proxy-config\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117297 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdvm\" (UniqueName: \"kubernetes.io/projected/dbab21cc-3581-4de0-911c-6baad4d03087-kube-api-access-xrdvm\") pod \"multus-admission-controller-857f4d67dd-9hk72\" (UID: \"dbab21cc-3581-4de0-911c-6baad4d03087\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117317 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wg48\" (UniqueName: 
\"kubernetes.io/projected/80d9d16b-8c37-4d97-8179-12909d9c2f53-kube-api-access-6wg48\") pod \"olm-operator-6b444d44fb-47pfg\" (UID: \"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117341 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91034075-af22-4fae-8684-a8914596c1ac-config\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117359 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-serving-cert\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117376 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mf2k\" (UniqueName: \"kubernetes.io/projected/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-kube-api-access-8mf2k\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117394 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/47aad68f-ce63-48d4-8fff-59513ec9f7b6-node-bootstrap-token\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117272 4711 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-dir\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117474 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/47aad68f-ce63-48d4-8fff-59513ec9f7b6-certs\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117495 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-config\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117522 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117539 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 
10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117543 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117589 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28ac9884-97cd-4efb-90f1-b742a5a9a519-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117646 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vh2k\" (UniqueName: \"kubernetes.io/projected/a26a2431-9c9a-48e9-9797-bf9ac466fc2f-kube-api-access-4vh2k\") pod \"package-server-manager-789f6589d5-jcqjc\" (UID: \"a26a2431-9c9a-48e9-9797-bf9ac466fc2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.117678 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt6jm\" (UniqueName: \"kubernetes.io/projected/91034075-af22-4fae-8684-a8914596c1ac-kube-api-access-mt6jm\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.118037 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-oauth-config\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.118505 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-trusted-ca\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.118937 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e365db0-c1ec-415c-8310-02e222ac80c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.119105 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea1baa74-f09b-497d-a9df-d73953bf8a22-metrics-tls\") pod \"dns-operator-744455d44c-jbwvc\" (UID: \"ea1baa74-f09b-497d-a9df-d73953bf8a22\") " pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.119483 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-config\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.119588 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-config\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.119859 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91034075-af22-4fae-8684-a8914596c1ac-serving-cert\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.119923 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91034075-af22-4fae-8684-a8914596c1ac-config\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.120078 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.120252 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-encryption-config\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.120828 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.121021 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.121041 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.121107 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34333510-8dc2-45c2-9c08-013bdb2bcd85-metrics-certs\") pod \"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.129071 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-serving-cert\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" 
Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.129412 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.131859 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-etcd-client\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.133326 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.133440 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.133568 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.134168 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9bbe773-63b6-490e-a058-a12050a40b4a-serving-cert\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.143504 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.154237 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" event={"ID":"9991ec66-70eb-4442-9e35-34e05d7c0dfd","Type":"ContainerStarted","Data":"e1aa391e5e6f4251c105724afa6d4525e50fe953768c2f85e7973f4e3044c940"} Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.156022 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" event={"ID":"72175ecc-cd0d-451c-a30d-59962898bec9","Type":"ContainerStarted","Data":"a2dac00ea6ffb144ea82f761d96c78672eab9b57a043c7e7ef981e506bf1732e"} Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.157007 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" event={"ID":"5fc9623a-e271-4424-bb04-a0a502c81a8a","Type":"ContainerStarted","Data":"1a74a0ec29be27939e6fa2d677015f41619924cfa67c82464af5e4aab01f24e8"} Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.163690 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.183374 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.193501 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/764e4272-f3e2-4a3f-a390-7929850c7150-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2rvdj\" (UID: \"764e4272-f3e2-4a3f-a390-7929850c7150\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.206670 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.212351 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-audit\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.218593 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.218811 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.218846 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xntzk\" (UniqueName: \"kubernetes.io/projected/90d228ed-836d-4316-b404-a281ae332a8a-kube-api-access-xntzk\") pod \"machine-approver-56656f9798-xtx7c\" (UID: 
\"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.218865 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3bc47fe-80a3-4a18-bada-6eebb2abff12-webhook-cert\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.218905 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43cb1081-b058-41a4-a47c-e106aa0d2e41-srv-cert\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.218926 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94ac6b65-651b-4461-90de-62a5987b52e0-proxy-tls\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.218964 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-csi-data-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.218981 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lvgd\" (UniqueName: 
\"kubernetes.io/projected/d4dfbfed-09bb-4b36-bcfb-326077388f98-kube-api-access-4lvgd\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219010 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/334bed7f-2766-4ef7-9eee-9312c1453bbb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219032 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94ac6b65-651b-4461-90de-62a5987b52e0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219054 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-socket-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219077 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28ac9884-97cd-4efb-90f1-b742a5a9a519-proxy-tls\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219102 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90d228ed-836d-4316-b404-a281ae332a8a-auth-proxy-config\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219123 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b69aaa1-6c16-4417-a703-281e816a8642-serving-cert\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219161 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/47aad68f-ce63-48d4-8fff-59513ec9f7b6-node-bootstrap-token\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219182 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/47aad68f-ce63-48d4-8fff-59513ec9f7b6-certs\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219200 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrdvm\" (UniqueName: \"kubernetes.io/projected/dbab21cc-3581-4de0-911c-6baad4d03087-kube-api-access-xrdvm\") pod \"multus-admission-controller-857f4d67dd-9hk72\" (UID: \"dbab21cc-3581-4de0-911c-6baad4d03087\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219292 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wg48\" (UniqueName: \"kubernetes.io/projected/80d9d16b-8c37-4d97-8179-12909d9c2f53-kube-api-access-6wg48\") pod \"olm-operator-6b444d44fb-47pfg\" (UID: \"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219312 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28ac9884-97cd-4efb-90f1-b742a5a9a519-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219332 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vh2k\" (UniqueName: \"kubernetes.io/projected/a26a2431-9c9a-48e9-9797-bf9ac466fc2f-kube-api-access-4vh2k\") pod \"package-server-manager-789f6589d5-jcqjc\" (UID: \"a26a2431-9c9a-48e9-9797-bf9ac466fc2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219356 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3bc47fe-80a3-4a18-bada-6eebb2abff12-apiservice-cert\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219373 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h68g\" 
(UniqueName: \"kubernetes.io/projected/b3bc47fe-80a3-4a18-bada-6eebb2abff12-kube-api-access-7h68g\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219388 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/334bed7f-2766-4ef7-9eee-9312c1453bbb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219445 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf74s\" (UniqueName: \"kubernetes.io/projected/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-kube-api-access-sf74s\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219475 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wptz\" (UniqueName: \"kubernetes.io/projected/bf7b0fe2-c9dc-4486-926c-0432cfc03172-kube-api-access-4wptz\") pod \"ingress-canary-n54fh\" (UID: \"bf7b0fe2-c9dc-4486-926c-0432cfc03172\") " pod="openshift-ingress-canary/ingress-canary-n54fh" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219508 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94ac6b65-651b-4461-90de-62a5987b52e0-images\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219523 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf7b0fe2-c9dc-4486-926c-0432cfc03172-cert\") pod \"ingress-canary-n54fh\" (UID: \"bf7b0fe2-c9dc-4486-926c-0432cfc03172\") " pod="openshift-ingress-canary/ingress-canary-n54fh" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219538 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-config-volume\") pod \"collect-profiles-29411175-prrng\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219558 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/140261a1-9b1a-4829-8926-6ddaaaf55503-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219579 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b69aaa1-6c16-4417-a703-281e816a8642-config\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219643 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80d9d16b-8c37-4d97-8179-12909d9c2f53-srv-cert\") pod \"olm-operator-6b444d44fb-47pfg\" (UID: 
\"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219667 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4sbm\" (UniqueName: \"kubernetes.io/projected/eb441657-98d8-4fbe-9f7f-50c64f1414ba-kube-api-access-f4sbm\") pod \"migrator-59844c95c7-wrrqk\" (UID: \"eb441657-98d8-4fbe-9f7f-50c64f1414ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219691 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lz9s\" (UniqueName: \"kubernetes.io/projected/8f8ce2ea-2335-474b-97c9-1108e2157a2b-kube-api-access-4lz9s\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219714 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvnf5\" (UniqueName: \"kubernetes.io/projected/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-kube-api-access-jvnf5\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219730 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mskn\" (UniqueName: \"kubernetes.io/projected/5b69aaa1-6c16-4417-a703-281e816a8642-kube-api-access-2mskn\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219749 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-config-volume\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219764 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219806 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d228ed-836d-4316-b404-a281ae332a8a-config\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219821 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbab21cc-3581-4de0-911c-6baad4d03087-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9hk72\" (UID: \"dbab21cc-3581-4de0-911c-6baad4d03087\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219836 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-registration-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219858 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a26a2431-9c9a-48e9-9797-bf9ac466fc2f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jcqjc\" (UID: \"a26a2431-9c9a-48e9-9797-bf9ac466fc2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219886 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-plugins-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219902 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8f8ce2ea-2335-474b-97c9-1108e2157a2b-signing-cabundle\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219926 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-bound-sa-token\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219980 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/140261a1-9b1a-4829-8926-6ddaaaf55503-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.219997 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w52lp\" (UniqueName: \"kubernetes.io/projected/47aad68f-ce63-48d4-8fff-59513ec9f7b6-kube-api-access-w52lp\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220022 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq98b\" (UniqueName: \"kubernetes.io/projected/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-kube-api-access-bq98b\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220037 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-metrics-tls\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220052 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl9sx\" (UniqueName: \"kubernetes.io/projected/28ac9884-97cd-4efb-90f1-b742a5a9a519-kube-api-access-nl9sx\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220068 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80d9d16b-8c37-4d97-8179-12909d9c2f53-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-47pfg\" (UID: \"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220091 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jctq\" (UniqueName: \"kubernetes.io/projected/8e8230b2-fb50-43c7-8a69-af1d02cce895-kube-api-access-6jctq\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220112 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220128 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-trusted-ca\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220145 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7nqn\" (UniqueName: \"kubernetes.io/projected/f6b3df69-76ea-4424-8d50-b2646cf2cd0e-kube-api-access-l7nqn\") pod \"control-plane-machine-set-operator-78cbb6b69f-7kmjp\" (UID: \"f6b3df69-76ea-4424-8d50-b2646cf2cd0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220161 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-metrics-tls\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220175 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b3bc47fe-80a3-4a18-bada-6eebb2abff12-tmpfs\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220191 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43cb1081-b058-41a4-a47c-e106aa0d2e41-profile-collector-cert\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220249 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6b3df69-76ea-4424-8d50-b2646cf2cd0e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7kmjp\" (UID: \"f6b3df69-76ea-4424-8d50-b2646cf2cd0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220267 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86dp\" (UniqueName: \"kubernetes.io/projected/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-kube-api-access-d86dp\") pod \"collect-profiles-29411175-prrng\" (UID: 
\"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220292 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmrnd\" (UniqueName: \"kubernetes.io/projected/94ac6b65-651b-4461-90de-62a5987b52e0-kube-api-access-xmrnd\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220309 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsw4g\" (UniqueName: \"kubernetes.io/projected/43cb1081-b058-41a4-a47c-e106aa0d2e41-kube-api-access-hsw4g\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220329 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90d228ed-836d-4316-b404-a281ae332a8a-machine-approver-tls\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220345 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/334bed7f-2766-4ef7-9eee-9312c1453bbb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220359 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-secret-volume\") pod \"collect-profiles-29411175-prrng\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220376 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8f8ce2ea-2335-474b-97c9-1108e2157a2b-signing-key\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220374 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220392 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220487 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140261a1-9b1a-4829-8926-6ddaaaf55503-config\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" 
Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220503 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-csi-data-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220521 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-mountpoint-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.220711 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-mountpoint-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.220802 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:56.720776258 +0000 UTC m=+146.430142795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.221529 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140261a1-9b1a-4829-8926-6ddaaaf55503-config\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.221964 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.221967 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94ac6b65-651b-4461-90de-62a5987b52e0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.222074 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d228ed-836d-4316-b404-a281ae332a8a-config\") pod 
\"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.222598 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-trusted-ca\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.222726 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28ac9884-97cd-4efb-90f1-b742a5a9a519-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.223046 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94ac6b65-651b-4461-90de-62a5987b52e0-images\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.223170 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3bc47fe-80a3-4a18-bada-6eebb2abff12-webhook-cert\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.223370 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/334bed7f-2766-4ef7-9eee-9312c1453bbb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.223683 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-registration-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.223709 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94ac6b65-651b-4461-90de-62a5987b52e0-proxy-tls\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.224426 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43cb1081-b058-41a4-a47c-e106aa0d2e41-srv-cert\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.224613 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/47aad68f-ce63-48d4-8fff-59513ec9f7b6-certs\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.224707 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-socket-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.225759 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43cb1081-b058-41a4-a47c-e106aa0d2e41-profile-collector-cert\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.226088 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b3bc47fe-80a3-4a18-bada-6eebb2abff12-tmpfs\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.226385 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8f8ce2ea-2335-474b-97c9-1108e2157a2b-signing-key\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.226799 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3bc47fe-80a3-4a18-bada-6eebb2abff12-apiservice-cert\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.227054 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/bf7b0fe2-c9dc-4486-926c-0432cfc03172-cert\") pod \"ingress-canary-n54fh\" (UID: \"bf7b0fe2-c9dc-4486-926c-0432cfc03172\") " pod="openshift-ingress-canary/ingress-canary-n54fh" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.227352 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6b3df69-76ea-4424-8d50-b2646cf2cd0e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7kmjp\" (UID: \"f6b3df69-76ea-4424-8d50-b2646cf2cd0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.227474 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-metrics-tls\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.227612 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d4dfbfed-09bb-4b36-bcfb-326077388f98-plugins-dir\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.227838 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-config-volume\") pod \"collect-profiles-29411175-prrng\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.228357 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbab21cc-3581-4de0-911c-6baad4d03087-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9hk72\" (UID: \"dbab21cc-3581-4de0-911c-6baad4d03087\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.229072 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b69aaa1-6c16-4417-a703-281e816a8642-config\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.229076 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8f8ce2ea-2335-474b-97c9-1108e2157a2b-signing-cabundle\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.229666 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.230568 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/334bed7f-2766-4ef7-9eee-9312c1453bbb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 
10:15:56.231442 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80d9d16b-8c37-4d97-8179-12909d9c2f53-profile-collector-cert\") pod \"olm-operator-6b444d44fb-47pfg\" (UID: \"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.231813 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/47aad68f-ce63-48d4-8fff-59513ec9f7b6-node-bootstrap-token\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.232067 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-config-volume\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.232254 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-secret-volume\") pod \"collect-profiles-29411175-prrng\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.232609 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.233010 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80d9d16b-8c37-4d97-8179-12909d9c2f53-srv-cert\") pod 
\"olm-operator-6b444d44fb-47pfg\" (UID: \"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.233142 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b69aaa1-6c16-4417-a703-281e816a8642-serving-cert\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.234195 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/140261a1-9b1a-4829-8926-6ddaaaf55503-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.234729 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.235246 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90d228ed-836d-4316-b404-a281ae332a8a-machine-approver-tls\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.243099 4711 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager"/"client-ca" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.267665 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.284153 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.287296 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.303965 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.321848 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.322208 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:56.822191634 +0000 UTC m=+146.531558071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.324365 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.343358 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.351846 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-serving-cert\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.363654 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.370967 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-audit-policies\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.383372 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 
10:15:56.403889 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.423312 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.423522 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:56.923492347 +0000 UTC m=+146.632858814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.423705 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.424049 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.424573 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:56.924559825 +0000 UTC m=+146.633926282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.435619 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-metrics-tls\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.435697 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28ac9884-97cd-4efb-90f1-b742a5a9a519-proxy-tls\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.435709 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmkcs\" (UniqueName: \"kubernetes.io/projected/764e4272-f3e2-4a3f-a390-7929850c7150-kube-api-access-qmkcs\") pod 
\"cluster-samples-operator-665b6dd947-2rvdj\" (UID: \"764e4272-f3e2-4a3f-a390-7929850c7150\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.435835 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a26a2431-9c9a-48e9-9797-bf9ac466fc2f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jcqjc\" (UID: \"a26a2431-9c9a-48e9-9797-bf9ac466fc2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.436181 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-config\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.436236 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.436586 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067e3491-3d3c-4bc6-a164-9093f895fbcf-serving-cert\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.436796 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90d228ed-836d-4316-b404-a281ae332a8a-auth-proxy-config\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.436861 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-client-ca\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.443931 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.451075 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd1e0117-926f-4673-ac70-b25dc56e7403-etcd-client\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.463827 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.482943 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.493087 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da86b0-43ce-4526-97db-a82df759ef58-serving-cert\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: 
\"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.502838 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.524168 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.525326 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.525666 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.025630001 +0000 UTC m=+146.734996468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.525873 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.526262 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.026252859 +0000 UTC m=+146.735619306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.527630 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-config\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.543722 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.549922 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd1e0117-926f-4673-ac70-b25dc56e7403-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.563405 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.584093 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.593893 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2367c432-3e23-436c-aff6-31e1c32f8809-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.603588 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.612423 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5f3363-25e9-4f5b-94ed-843a17d17997-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.623735 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.625213 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9bbe773-63b6-490e-a058-a12050a40b4a-etcd-serving-ca\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.627493 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.627648 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.127625413 +0000 UTC m=+146.836991860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.627745 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.628161 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.128110547 +0000 UTC m=+146.837477004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.643991 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.653342 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.663638 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.669348 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d5f3363-25e9-4f5b-94ed-843a17d17997-config\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.718724 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh5sg\" (UniqueName: \"kubernetes.io/projected/585fe769-d9ad-42f1-8cb6-29904018f637-kube-api-access-kh5sg\") pod \"downloads-7954f5f757-4m2lb\" (UID: \"585fe769-d9ad-42f1-8cb6-29904018f637\") " pod="openshift-console/downloads-7954f5f757-4m2lb" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.731155 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.731497 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.231443294 +0000 UTC m=+146.940809741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.732480 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.732937 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.232919045 +0000 UTC m=+146.942285492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.746703 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvxs\" (UniqueName: \"kubernetes.io/projected/4d5f3363-25e9-4f5b-94ed-843a17d17997-kube-api-access-hlvxs\") pod \"machine-api-operator-5694c8668f-5n6jf\" (UID: \"4d5f3363-25e9-4f5b-94ed-843a17d17997\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.771015 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8tcc\" (UniqueName: \"kubernetes.io/projected/2367c432-3e23-436c-aff6-31e1c32f8809-kube-api-access-b8tcc\") pod \"openshift-apiserver-operator-796bbdcf4f-6cr5c\" (UID: \"2367c432-3e23-436c-aff6-31e1c32f8809\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.784675 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-bound-sa-token\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.798346 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8mwl\" (UniqueName: \"kubernetes.io/projected/34333510-8dc2-45c2-9c08-013bdb2bcd85-kube-api-access-r8mwl\") pod 
\"router-default-5444994796-zrj4j\" (UID: \"34333510-8dc2-45c2-9c08-013bdb2bcd85\") " pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.818745 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxpdb\" (UniqueName: \"kubernetes.io/projected/067e3491-3d3c-4bc6-a164-9093f895fbcf-kube-api-access-vxpdb\") pod \"controller-manager-879f6c89f-49njp\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.834852 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj"] Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.834933 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.835115 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.335083571 +0000 UTC m=+147.044450018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.838316 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.838859 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.338842893 +0000 UTC m=+147.048209340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.845352 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b49lf\" (UniqueName: \"kubernetes.io/projected/53da86b0-43ce-4526-97db-a82df759ef58-kube-api-access-b49lf\") pod \"route-controller-manager-6576b87f9c-qjz2m\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.860238 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-phj8b\" (UID: \"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.886206 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv598\" (UniqueName: \"kubernetes.io/projected/a996dfa5-84ad-41e6-aee0-ed17df150b5b-kube-api-access-xv598\") pod \"authentication-operator-69f744f599-rc7wl\" (UID: \"a996dfa5-84ad-41e6-aee0-ed17df150b5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.890992 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.908492 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e365db0-c1ec-415c-8310-02e222ac80c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.923394 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.925964 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc49d\" (UniqueName: \"kubernetes.io/projected/fd1e0117-926f-4673-ac70-b25dc56e7403-kube-api-access-bc49d\") pod \"apiserver-7bbb656c7d-mhgqn\" (UID: \"fd1e0117-926f-4673-ac70-b25dc56e7403\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.940385 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.940558 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.440523827 +0000 UTC m=+147.149890274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.941221 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: E1202 10:15:56.941652 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.441612617 +0000 UTC m=+147.150979064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.941692 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2lkk\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-kube-api-access-w2lkk\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.969688 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.971991 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbw6g\" (UniqueName: \"kubernetes.io/projected/ea1baa74-f09b-497d-a9df-d73953bf8a22-kube-api-access-zbw6g\") pod \"dns-operator-744455d44c-jbwvc\" (UID: \"ea1baa74-f09b-497d-a9df-d73953bf8a22\") " pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.977296 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" Dec 02 10:15:56 crc kubenswrapper[4711]: I1202 10:15:56.990595 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j85g\" (UniqueName: \"kubernetes.io/projected/b9bbe773-63b6-490e-a058-a12050a40b4a-kube-api-access-9j85g\") pod \"apiserver-76f77b778f-2c7s8\" (UID: \"b9bbe773-63b6-490e-a058-a12050a40b4a\") " pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.013796 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4m2lb" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.018527 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgsns\" (UniqueName: \"kubernetes.io/projected/2e365db0-c1ec-415c-8310-02e222ac80c1-kube-api-access-mgsns\") pod \"cluster-image-registry-operator-dc59b4c8b-wpzpm\" (UID: \"2e365db0-c1ec-415c-8310-02e222ac80c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.019152 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwzlg\" (UniqueName: \"kubernetes.io/projected/8c1f70ef-1183-4621-bb91-ffe2d31fa391-kube-api-access-cwzlg\") pod \"console-f9d7485db-g2lxx\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.036794 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.042967 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.043456 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.543438313 +0000 UTC m=+147.252804760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.050199 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.058917 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt6jm\" (UniqueName: \"kubernetes.io/projected/91034075-af22-4fae-8684-a8914596c1ac-kube-api-access-mt6jm\") pod \"console-operator-58897d9998-6gz75\" (UID: \"91034075-af22-4fae-8684-a8914596c1ac\") " pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.067153 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.067369 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mf2k\" (UniqueName: \"kubernetes.io/projected/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-kube-api-access-8mf2k\") pod \"oauth-openshift-558db77b4-6sr4n\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") " pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.076600 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.084764 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lvgd\" (UniqueName: \"kubernetes.io/projected/d4dfbfed-09bb-4b36-bcfb-326077388f98-kube-api-access-4lvgd\") pod \"csi-hostpathplugin-mrbgr\" (UID: \"d4dfbfed-09bb-4b36-bcfb-326077388f98\") " pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.101440 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntzk\" (UniqueName: \"kubernetes.io/projected/90d228ed-836d-4316-b404-a281ae332a8a-kube-api-access-xntzk\") pod \"machine-approver-56656f9798-xtx7c\" (UID: \"90d228ed-836d-4316-b404-a281ae332a8a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.116936 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/334bed7f-2766-4ef7-9eee-9312c1453bbb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9ptzk\" (UID: \"334bed7f-2766-4ef7-9eee-9312c1453bbb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.141216 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.144054 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-49njp"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.144659 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.145036 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.645023334 +0000 UTC m=+147.354389781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.147463 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86dp\" (UniqueName: \"kubernetes.io/projected/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-kube-api-access-d86dp\") pod \"collect-profiles-29411175-prrng\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.163287 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.164447 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrdvm\" (UniqueName: \"kubernetes.io/projected/dbab21cc-3581-4de0-911c-6baad4d03087-kube-api-access-xrdvm\") pod \"multus-admission-controller-857f4d67dd-9hk72\" (UID: \"dbab21cc-3581-4de0-911c-6baad4d03087\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.169283 4711 generic.go:334] "Generic (PLEG): container finished" podID="72175ecc-cd0d-451c-a30d-59962898bec9" containerID="71b67150e180e83ad0b3e6cd5a0c5292e9172a871e77621664c3be47367aebe4" exitCode=0 Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.169346 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" 
event={"ID":"72175ecc-cd0d-451c-a30d-59962898bec9","Type":"ContainerDied","Data":"71b67150e180e83ad0b3e6cd5a0c5292e9172a871e77621664c3be47367aebe4"} Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.178928 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" event={"ID":"9991ec66-70eb-4442-9e35-34e05d7c0dfd","Type":"ContainerStarted","Data":"b1a2aed1f358f51d79f5fc01083cf39a324717283585d6092a4bdfc10dbd540b"} Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.181194 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wg48\" (UniqueName: \"kubernetes.io/projected/80d9d16b-8c37-4d97-8179-12909d9c2f53-kube-api-access-6wg48\") pod \"olm-operator-6b444d44fb-47pfg\" (UID: \"80d9d16b-8c37-4d97-8179-12909d9c2f53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.183025 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.183323 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.184571 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zrj4j" event={"ID":"34333510-8dc2-45c2-9c08-013bdb2bcd85","Type":"ContainerStarted","Data":"cc6bed66c97ebe3dced6d188eee1ce25381b7f27c020be937326c09fb91860ea"} Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.187520 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.191820 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" event={"ID":"5fc9623a-e271-4424-bb04-a0a502c81a8a","Type":"ContainerStarted","Data":"5058ec181ecefdcb62f0780ae775402b5e34633acae55a98dc15e6ee6492709a"} Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.193850 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" event={"ID":"764e4272-f3e2-4a3f-a390-7929850c7150","Type":"ContainerStarted","Data":"2d94e8116af2a0a99176c8652c5c455bd9268b64f92ee25a2b97817127bcdca0"} Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.193904 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" event={"ID":"764e4272-f3e2-4a3f-a390-7929850c7150","Type":"ContainerStarted","Data":"86c32354bbf99065ab00c17fd44d9afbf2827d29e650d0eec788e28bdd04003e"} Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.199519 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf74s\" (UniqueName: \"kubernetes.io/projected/dadea7f6-d37c-458a-9f16-4d3477f4b6f3-kube-api-access-sf74s\") pod \"kube-storage-version-migrator-operator-b67b599dd-5lktx\" (UID: \"dadea7f6-d37c-458a-9f16-4d3477f4b6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.211703 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.222592 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.223823 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wptz\" (UniqueName: \"kubernetes.io/projected/bf7b0fe2-c9dc-4486-926c-0432cfc03172-kube-api-access-4wptz\") pod \"ingress-canary-n54fh\" (UID: \"bf7b0fe2-c9dc-4486-926c-0432cfc03172\") " pod="openshift-ingress-canary/ingress-canary-n54fh" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.228259 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.240774 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5n6jf"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.242255 4711 request.go:700] Waited for 1.01939151s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.243317 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmrnd\" (UniqueName: \"kubernetes.io/projected/94ac6b65-651b-4461-90de-62a5987b52e0-kube-api-access-xmrnd\") pod \"machine-config-operator-74547568cd-2mmnj\" (UID: \"94ac6b65-651b-4461-90de-62a5987b52e0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.246339 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.247110 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.747079167 +0000 UTC m=+147.456445614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.248379 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n54fh" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.266358 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vh2k\" (UniqueName: \"kubernetes.io/projected/a26a2431-9c9a-48e9-9797-bf9ac466fc2f-kube-api-access-4vh2k\") pod \"package-server-manager-789f6589d5-jcqjc\" (UID: \"a26a2431-9c9a-48e9-9797-bf9ac466fc2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.269550 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.280391 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsw4g\" (UniqueName: \"kubernetes.io/projected/43cb1081-b058-41a4-a47c-e106aa0d2e41-kube-api-access-hsw4g\") pod \"catalog-operator-68c6474976-g9jnr\" (UID: \"43cb1081-b058-41a4-a47c-e106aa0d2e41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.286877 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.299053 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h68g\" (UniqueName: \"kubernetes.io/projected/b3bc47fe-80a3-4a18-bada-6eebb2abff12-kube-api-access-7h68g\") pod \"packageserver-d55dfcdfc-l54rr\" (UID: \"b3bc47fe-80a3-4a18-bada-6eebb2abff12\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.304356 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.323036 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7nqn\" (UniqueName: \"kubernetes.io/projected/f6b3df69-76ea-4424-8d50-b2646cf2cd0e-kube-api-access-l7nqn\") pod \"control-plane-machine-set-operator-78cbb6b69f-7kmjp\" (UID: \"f6b3df69-76ea-4424-8d50-b2646cf2cd0e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.329226 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.349518 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/140261a1-9b1a-4829-8926-6ddaaaf55503-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hvqhn\" (UID: \"140261a1-9b1a-4829-8926-6ddaaaf55503\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.353651 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.853614592 +0000 UTC m=+147.562981039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.354143 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.362472 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-bound-sa-token\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.380654 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.387728 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w52lp\" (UniqueName: \"kubernetes.io/projected/47aad68f-ce63-48d4-8fff-59513ec9f7b6-kube-api-access-w52lp\") pod \"machine-config-server-qp5kk\" (UID: \"47aad68f-ce63-48d4-8fff-59513ec9f7b6\") " pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.397541 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.403395 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq98b\" (UniqueName: \"kubernetes.io/projected/4d813713-0bc0-4865-a7a2-dcf1a15ae12f-kube-api-access-bq98b\") pod \"dns-default-qzmsv\" (UID: \"4d813713-0bc0-4865-a7a2-dcf1a15ae12f\") " pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.423255 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lz9s\" (UniqueName: \"kubernetes.io/projected/8f8ce2ea-2335-474b-97c9-1108e2157a2b-kube-api-access-4lz9s\") pod \"service-ca-9c57cc56f-n9qbk\" (UID: \"8f8ce2ea-2335-474b-97c9-1108e2157a2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.440203 4711 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nl9sx\" (UniqueName: \"kubernetes.io/projected/28ac9884-97cd-4efb-90f1-b742a5a9a519-kube-api-access-nl9sx\") pod \"machine-config-controller-84d6567774-z6sld\" (UID: \"28ac9884-97cd-4efb-90f1-b742a5a9a519\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.440489 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.455199 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.457611 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.457780 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.458446 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:57.958419941 +0000 UTC m=+147.667786388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.458602 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4sbm\" (UniqueName: \"kubernetes.io/projected/eb441657-98d8-4fbe-9f7f-50c64f1414ba-kube-api-access-f4sbm\") pod \"migrator-59844c95c7-wrrqk\" (UID: \"eb441657-98d8-4fbe-9f7f-50c64f1414ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.474327 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.481118 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.484147 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jctq\" (UniqueName: \"kubernetes.io/projected/8e8230b2-fb50-43c7-8a69-af1d02cce895-kube-api-access-6jctq\") pod \"marketplace-operator-79b997595-m5tws\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") " pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.494968 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.521701 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.522693 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4m2lb"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.526979 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvnf5\" (UniqueName: \"kubernetes.io/projected/80ff1a4d-6503-4de6-b29a-d5fcfbefc729-kube-api-access-jvnf5\") pod \"ingress-operator-5b745b69d9-st4hg\" (UID: \"80ff1a4d-6503-4de6-b29a-d5fcfbefc729\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.527818 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mskn\" (UniqueName: \"kubernetes.io/projected/5b69aaa1-6c16-4417-a703-281e816a8642-kube-api-access-2mskn\") pod \"service-ca-operator-777779d784-22ngp\" (UID: \"5b69aaa1-6c16-4417-a703-281e816a8642\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.534527 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.542000 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qzmsv" Dec 02 10:15:57 crc kubenswrapper[4711]: W1202 10:15:57.548490 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2367c432_3e23_436c_aff6_31e1c32f8809.slice/crio-66f3feb928ec05491d62dfb840a187635f128bd945367da8afbbd63419c68147 WatchSource:0}: Error finding container 66f3feb928ec05491d62dfb840a187635f128bd945367da8afbbd63419c68147: Status 404 returned error can't find the container with id 66f3feb928ec05491d62dfb840a187635f128bd945367da8afbbd63419c68147 Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.559752 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.560138 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.060122494 +0000 UTC m=+147.769488931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.575331 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qp5kk" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.658611 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.665817 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.667041 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.167014839 +0000 UTC m=+147.876381286 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.667222 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.667676 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.167665947 +0000 UTC m=+147.877032394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.667717 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jbwvc"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.702859 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.717689 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.725581 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.772975 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.773376 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 10:15:58.273354489 +0000 UTC m=+147.982720936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.773655 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.804131 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.877617 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.882090 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.382043353 +0000 UTC m=+148.091409810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.886862 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2lxx"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.892732 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn"] Dec 02 10:15:57 crc kubenswrapper[4711]: I1202 10:15:57.978131 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:57 crc kubenswrapper[4711]: E1202 10:15:57.978550 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.478533324 +0000 UTC m=+148.187899761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.080288 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.080675 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.580658989 +0000 UTC m=+148.290025436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.126257 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rc7wl"] Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.185669 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.186159 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.686139156 +0000 UTC m=+148.395505603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.280678 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" event={"ID":"90d228ed-836d-4316-b404-a281ae332a8a","Type":"ContainerStarted","Data":"a4cb151ca427bf26fb267b9134423e56d571aebea4fc5c888941d25812681bef"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.290388 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.296049 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.795994092 +0000 UTC m=+148.505360539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.309708 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" event={"ID":"2367c432-3e23-436c-aff6-31e1c32f8809","Type":"ContainerStarted","Data":"66f3feb928ec05491d62dfb840a187635f128bd945367da8afbbd63419c68147"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.319738 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4m2lb" event={"ID":"585fe769-d9ad-42f1-8cb6-29904018f637","Type":"ContainerStarted","Data":"d8a194b6b515ca73c688f0a503f81418190cc577d2c7d8fdbe8389994212d3e5"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.320329 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4m2lb" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.324681 4711 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m2lb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.324766 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m2lb" podUID="585fe769-d9ad-42f1-8cb6-29904018f637" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: 
connection refused" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.333111 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" event={"ID":"fd1e0117-926f-4673-ac70-b25dc56e7403","Type":"ContainerStarted","Data":"878b889b0f8555c711705c77c277090814303c9f984d93a93e47aaeb9689acdf"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.334687 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zrj4j" event={"ID":"34333510-8dc2-45c2-9c08-013bdb2bcd85","Type":"ContainerStarted","Data":"65a06c634b73e1db9a680e47e90ac4753cca5e9c80dd774865b6511ad628ed06"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.348860 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" event={"ID":"72175ecc-cd0d-451c-a30d-59962898bec9","Type":"ContainerStarted","Data":"017d80bef8c44cae51cbd212ac8980e52f103365e017c09988976941ec643af0"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.349145 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.350542 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" event={"ID":"4d5f3363-25e9-4f5b-94ed-843a17d17997","Type":"ContainerStarted","Data":"6a2431a548e166818fb9bfc1af73b06950b45bd520026f9e31e533886c75473c"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.350582 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" event={"ID":"4d5f3363-25e9-4f5b-94ed-843a17d17997","Type":"ContainerStarted","Data":"6d1d59588828de7113e5fdd0efb289dc5d5444015d95a24e249069fa418b9548"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.356292 4711 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qp5kk" event={"ID":"47aad68f-ce63-48d4-8fff-59513ec9f7b6","Type":"ContainerStarted","Data":"f3eeb4d0eaddc86592360116df69672da22d2742f45217f28cd6828b16080660"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.360288 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" event={"ID":"067e3491-3d3c-4bc6-a164-9093f895fbcf","Type":"ContainerStarted","Data":"15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.360499 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" event={"ID":"067e3491-3d3c-4bc6-a164-9093f895fbcf","Type":"ContainerStarted","Data":"93c0d0dc9efe58616709b80ec2372ebc85a9ef8f0de6d7b2af02f3f7be5915ef"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.361037 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.362360 4711 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-49njp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.362403 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" podUID="067e3491-3d3c-4bc6-a164-9093f895fbcf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.362902 4711 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console/console-f9d7485db-g2lxx" event={"ID":"8c1f70ef-1183-4621-bb91-ffe2d31fa391","Type":"ContainerStarted","Data":"97450e7ec449ec2329c227862bd350501e922e62a43afd50d28d6e215254dff8"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.365619 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" event={"ID":"ea1baa74-f09b-497d-a9df-d73953bf8a22","Type":"ContainerStarted","Data":"333e5e698437e462e7e91a15182105a81145ecb8bb13cc415d057e2c7786d574"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.384578 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" event={"ID":"53da86b0-43ce-4526-97db-a82df759ef58","Type":"ContainerStarted","Data":"afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.385093 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.385111 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" event={"ID":"53da86b0-43ce-4526-97db-a82df759ef58","Type":"ContainerStarted","Data":"2c0080af4d6aa7bdf23f1f658a3cf1a2ef1a42325d6758c23f66d2c4359d6bf6"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.386446 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" event={"ID":"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919","Type":"ContainerStarted","Data":"90af9b777eb1028f74831041c3fa62add85fd3ba2e75dc27d785650a0d7d2434"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.387311 4711 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qjz2m 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.387391 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" podUID="53da86b0-43ce-4526-97db-a82df759ef58" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.400791 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.401091 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.901062137 +0000 UTC m=+148.610428594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.401574 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.401973 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:58.901930281 +0000 UTC m=+148.611296808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.405426 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" event={"ID":"764e4272-f3e2-4a3f-a390-7929850c7150","Type":"ContainerStarted","Data":"6305a4e1791ed5eea12533515d0c0fe44a5106055e56691be19ff47ddf76490e"} Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.414163 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4m2lb" podStartSLOduration=127.414121723 podStartE2EDuration="2m7.414121723s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:58.37547418 +0000 UTC m=+148.084840647" watchObservedRunningTime="2025-12-02 10:15:58.414121723 +0000 UTC m=+148.123488170" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.502663 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.504368 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.004344994 +0000 UTC m=+148.713711451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.608734 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.609194 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.109174763 +0000 UTC m=+148.818541210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.709300 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fsfqp" podStartSLOduration=127.709271732 podStartE2EDuration="2m7.709271732s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:58.70882085 +0000 UTC m=+148.418187297" watchObservedRunningTime="2025-12-02 10:15:58.709271732 +0000 UTC m=+148.418638179" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.712298 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.712661 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.212640124 +0000 UTC m=+148.922006571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.743560 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zrj4j" podStartSLOduration=127.743535257 podStartE2EDuration="2m7.743535257s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:58.742463708 +0000 UTC m=+148.451830165" watchObservedRunningTime="2025-12-02 10:15:58.743535257 +0000 UTC m=+148.452901704" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.813307 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.813764 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.313745582 +0000 UTC m=+149.023112029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.865901 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vqg5c" podStartSLOduration=127.865874023 podStartE2EDuration="2m7.865874023s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:58.855909772 +0000 UTC m=+148.565276239" watchObservedRunningTime="2025-12-02 10:15:58.865874023 +0000 UTC m=+148.575240470" Dec 02 10:15:58 crc kubenswrapper[4711]: I1202 10:15:58.914262 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:58 crc kubenswrapper[4711]: E1202 10:15:58.914642 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.414622263 +0000 UTC m=+149.123988710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.015868 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.015985 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.016018 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.016548 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.516332596 +0000 UTC m=+149.225699043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.016820 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.034399 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.127598 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.128444 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.128545 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.628527056 +0000 UTC m=+149.337893503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.128793 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.129074 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.629063291 +0000 UTC m=+149.338429748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.135345 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:15:59 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:15:59 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:15:59 crc kubenswrapper[4711]: healthz check failed Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.135404 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.136405 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.230393 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.230711 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.730695262 +0000 UTC m=+149.440061709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.230908 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.230943 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.230983 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.231213 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.731205706 +0000 UTC m=+149.440572153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.238056 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.238579 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.262704 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2c7s8"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.283794 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n54fh"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.299317 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9hk72"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.354929 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.355209 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.855192987 +0000 UTC m=+149.564559434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.404108 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.438346 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.460446 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.460936 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:15:59.960904881 +0000 UTC m=+149.670271328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.497897 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" podStartSLOduration=128.497868068 podStartE2EDuration="2m8.497868068s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.457976971 +0000 UTC m=+149.167343428" watchObservedRunningTime="2025-12-02 10:15:59.497868068 +0000 UTC m=+149.207234525" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.525937 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" event={"ID":"a996dfa5-84ad-41e6-aee0-ed17df150b5b","Type":"ContainerStarted","Data":"7adec98ad2da3615bbfcacd1e0d75fbf9a8c5cb4a1f23bdfbcd95a0df729db9a"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.526021 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" event={"ID":"a996dfa5-84ad-41e6-aee0-ed17df150b5b","Type":"ContainerStarted","Data":"0fb9c67daeaad9e076a0a4926a757cd5a234bae891f8fcc4f0ca8513ae2d4fad"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.541521 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" 
event={"ID":"ea1baa74-f09b-497d-a9df-d73953bf8a22","Type":"ContainerStarted","Data":"dfd51c548547dd0db19f3d4cbee02a9b7d0b9647a87e9da2c6ff4e425485e494"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.557874 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" podStartSLOduration=128.557852804 podStartE2EDuration="2m8.557852804s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.521582355 +0000 UTC m=+149.230948802" watchObservedRunningTime="2025-12-02 10:15:59.557852804 +0000 UTC m=+149.267219251" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.563199 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" event={"ID":"b9bbe773-63b6-490e-a058-a12050a40b4a","Type":"ContainerStarted","Data":"330113d90f4001b74a7c901120c386ae3cb308d9e70133acf3e34aa496f6bdcf"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.564833 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.565447 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:00.065403921 +0000 UTC m=+149.774770368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.608547 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2rvdj" podStartSLOduration=128.608530096 podStartE2EDuration="2m8.608530096s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.557799513 +0000 UTC m=+149.267165960" watchObservedRunningTime="2025-12-02 10:15:59.608530096 +0000 UTC m=+149.317896543" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.610758 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.610793 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gz75"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.623300 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.628984 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" podStartSLOduration=128.628964434 podStartE2EDuration="2m8.628964434s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.6127187 +0000 UTC m=+149.322085147" watchObservedRunningTime="2025-12-02 10:15:59.628964434 +0000 UTC m=+149.338330881" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.630468 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4m2lb" event={"ID":"585fe769-d9ad-42f1-8cb6-29904018f637","Type":"ContainerStarted","Data":"0850da58a6931b901dde365811bc7d919c0d1c073fd3e83e237fa3ae97c80e0e"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.631629 4711 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m2lb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.631664 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m2lb" podUID="585fe769-d9ad-42f1-8cb6-29904018f637" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.644700 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" event={"ID":"98f5ed0e-0ecd-4dd9-a3fb-8bb38ad68919","Type":"ContainerStarted","Data":"10cf839affe72b460edb832106d2379a42f69661aae189ab66cfe4f7af443ad7"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.654815 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" event={"ID":"4d5f3363-25e9-4f5b-94ed-843a17d17997","Type":"ContainerStarted","Data":"4f62d5ac6b080cf3d90ff28075f566a224048c31ec023aaba1df79ff837a046e"} Dec 02 
10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.661718 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" event={"ID":"dbab21cc-3581-4de0-911c-6baad4d03087","Type":"ContainerStarted","Data":"503e56c11e4b22a2e3a2c5ba6d69652d69843ea20f8453e4d9d9bee1dfa0d211"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.683929 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rc7wl" podStartSLOduration=128.683909182 podStartE2EDuration="2m8.683909182s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.681332901 +0000 UTC m=+149.390699368" watchObservedRunningTime="2025-12-02 10:15:59.683909182 +0000 UTC m=+149.393275629" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.684840 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.706231 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:00.20621164 +0000 UTC m=+149.915578087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.709171 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.718668 4711 generic.go:334] "Generic (PLEG): container finished" podID="fd1e0117-926f-4673-ac70-b25dc56e7403" containerID="02b37fc33f582e81f4d0dc3205303935668ac94a16c7eda57d0a91f935d91477" exitCode=0 Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.718753 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" event={"ID":"fd1e0117-926f-4673-ac70-b25dc56e7403","Type":"ContainerDied","Data":"02b37fc33f582e81f4d0dc3205303935668ac94a16c7eda57d0a91f935d91477"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.739567 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n54fh" event={"ID":"bf7b0fe2-c9dc-4486-926c-0432cfc03172","Type":"ContainerStarted","Data":"2cb86e883c165c1532449881049bc61fb9be46f4e68fe9f0d9ff3085bc3e3e08"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.769015 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6sr4n"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.771174 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" 
event={"ID":"2367c432-3e23-436c-aff6-31e1c32f8809","Type":"ContainerStarted","Data":"7522c5e0c25ea8ba8ac99cc6ef2677a790192b3f8928cbd6794c3667160cd976"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.775094 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qp5kk" event={"ID":"47aad68f-ce63-48d4-8fff-59513ec9f7b6","Type":"ContainerStarted","Data":"08bb582589d9e6070e7977905abb7c0a1cdb447fc6b547e5f9782bca847b0be8"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.787906 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.788552 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:00.288501345 +0000 UTC m=+149.997867792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.795637 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2lxx" event={"ID":"8c1f70ef-1183-4621-bb91-ffe2d31fa391","Type":"ContainerStarted","Data":"82f8131babe3018bc6848c4a50122f7ca74b1e010d2392d24cc4cad392b779f1"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.800504 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-phj8b" podStartSLOduration=128.80047897 podStartE2EDuration="2m8.80047897s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.762238738 +0000 UTC m=+149.471605185" watchObservedRunningTime="2025-12-02 10:15:59.80047897 +0000 UTC m=+149.509845417" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.819257 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5n6jf" podStartSLOduration=128.819225702 podStartE2EDuration="2m8.819225702s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.811413829 +0000 UTC m=+149.520780276" watchObservedRunningTime="2025-12-02 10:15:59.819225702 +0000 UTC m=+149.528592179" Dec 02 10:15:59 
crc kubenswrapper[4711]: I1202 10:15:59.819590 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" event={"ID":"90d228ed-836d-4316-b404-a281ae332a8a","Type":"ContainerStarted","Data":"090bde7590b140fac108badc86dcd7c2cd707b715ca511f25f8365e2292e3392"} Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.819634 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.836467 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qp5kk" podStartSLOduration=5.836444071 podStartE2EDuration="5.836444071s" podCreationTimestamp="2025-12-02 10:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.835554177 +0000 UTC m=+149.544920624" watchObservedRunningTime="2025-12-02 10:15:59.836444071 +0000 UTC m=+149.545810518" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.869387 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.874101 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.891627 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 
10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.898793 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:00.398777202 +0000 UTC m=+150.108143649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:15:59 crc kubenswrapper[4711]: W1202 10:15:59.967151 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84dbad5b_6e48_48a8_bbe1_76f6e92eb785.slice/crio-7cffe44303ae84964e12902f63621ada5bc83309b739f503a0f6f514910950f2 WatchSource:0}: Error finding container 7cffe44303ae84964e12902f63621ada5bc83309b739f503a0f6f514910950f2: Status 404 returned error can't find the container with id 7cffe44303ae84964e12902f63621ada5bc83309b739f503a0f6f514910950f2 Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.978437 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6cr5c" podStartSLOduration=128.978418593 podStartE2EDuration="2m8.978418593s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:15:59.963095886 +0000 UTC m=+149.672462353" watchObservedRunningTime="2025-12-02 10:15:59.978418593 +0000 UTC m=+149.687785040" Dec 02 10:15:59 crc kubenswrapper[4711]: 
I1202 10:15:59.982285 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk"] Dec 02 10:15:59 crc kubenswrapper[4711]: I1202 10:15:59.997236 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:15:59 crc kubenswrapper[4711]: E1202 10:15:59.997689 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:00.497666478 +0000 UTC m=+150.207032925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:15:59.999123 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.039225 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-g2lxx" podStartSLOduration=129.039196371 podStartE2EDuration="2m9.039196371s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:00.026269619 +0000 UTC m=+149.735636086" watchObservedRunningTime="2025-12-02 10:16:00.039196371 +0000 UTC m=+149.748562818" Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.071541 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9qbk"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.128492 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.129505 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:00.629481923 +0000 UTC m=+150.338848370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.149573 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:00 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:00 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:00 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.149622 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.232944 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.233643 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 10:16:00.733616514 +0000 UTC m=+150.442982961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.338222 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" podStartSLOduration=130.338194455 podStartE2EDuration="2m10.338194455s" podCreationTimestamp="2025-12-02 10:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:00.214178573 +0000 UTC m=+149.923545040" watchObservedRunningTime="2025-12-02 10:16:00.338194455 +0000 UTC m=+150.047560902" Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.350038 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mrbgr"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.350649 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.367027 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr"] Dec 02 10:16:00 
crc kubenswrapper[4711]: E1202 10:16:00.367509 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:00.867474023 +0000 UTC m=+150.576840470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.381521 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.395970 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.420126 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.421483 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.437994 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22ngp"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.457734 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.457784 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.457795 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qzmsv"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.461755 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp"] Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.462379 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.462378 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5tws"] Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.462678 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:00.962661759 +0000 UTC m=+150.672028206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.563711 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.565015 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.06499185 +0000 UTC m=+150.774358297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.668828 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.669257 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.169235123 +0000 UTC m=+150.878601570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.769916 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.770345 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.27032755 +0000 UTC m=+150.979693997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.829680 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" event={"ID":"28ac9884-97cd-4efb-90f1-b742a5a9a519","Type":"ContainerStarted","Data":"a9066c2a0956053f042f2c86675aba90e1c5b0735bd988d1288502822ba77db7"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.843928 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" event={"ID":"ea1baa74-f09b-497d-a9df-d73953bf8a22","Type":"ContainerStarted","Data":"a9f25723b8813e501230a69b75a4e7725749a9f2fed4c8eacc22410e301b62eb"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.856565 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" event={"ID":"eb441657-98d8-4fbe-9f7f-50c64f1414ba","Type":"ContainerStarted","Data":"5f2c8ce0bb4651824c0a0afecc89d49653e6cedc2efaf8e0610618745de90ed9"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.863334 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" event={"ID":"8f8ce2ea-2335-474b-97c9-1108e2157a2b","Type":"ContainerStarted","Data":"7d0b635175d63c689bd793d4e30d5171699210a103d602a83e34a5c3961b930a"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.871341 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.871584 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.37156044 +0000 UTC m=+151.080926887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.871630 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.872714 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.372667851 +0000 UTC m=+151.082034298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.873106 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" event={"ID":"d4dfbfed-09bb-4b36-bcfb-326077388f98","Type":"ContainerStarted","Data":"f272413ff5b5d57994f900b511bcab43d9b62b723b94758d7a568228b23c6b40"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.874852 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" event={"ID":"8e8230b2-fb50-43c7-8a69-af1d02cce895","Type":"ContainerStarted","Data":"3b56e7ae646cc9a25c9d018cfdd203cb17225d317d76da294070a62bab2b51fa"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.876921 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" event={"ID":"dadea7f6-d37c-458a-9f16-4d3477f4b6f3","Type":"ContainerStarted","Data":"34106b0abbf4a0f1a237de4d2e5835e88b5a0969fd932b1eac546edc5b3b0c3f"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.877771 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" event={"ID":"140261a1-9b1a-4829-8926-6ddaaaf55503","Type":"ContainerStarted","Data":"6950f96e0628e3154480417a92141cfec6dc63a5c7b4ab5d01c36600bf7df10c"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.880942 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qzmsv" 
event={"ID":"4d813713-0bc0-4865-a7a2-dcf1a15ae12f","Type":"ContainerStarted","Data":"be2ebae5b6597f9daa6f8ead1cddee21efed7bbd5283597e6733b2a68e56b4a6"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.885163 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" event={"ID":"334bed7f-2766-4ef7-9eee-9312c1453bbb","Type":"ContainerStarted","Data":"d87ae60260bbf5ffdfcacce4103d4e49ddbcd52fc137085dd457a7e5b8611078"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.907585 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n54fh" event={"ID":"bf7b0fe2-c9dc-4486-926c-0432cfc03172","Type":"ContainerStarted","Data":"17c4d0991e6a8038526d1295cf7ce6fcb58d20b5b979c6dcd55c6ec25475145f"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.920557 4711 generic.go:334] "Generic (PLEG): container finished" podID="b9bbe773-63b6-490e-a058-a12050a40b4a" containerID="ad28596d87308c0f77f3e58c68f8e6703d28857a0a1bdc075d92d7d3664bb72e" exitCode=0 Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.920641 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" event={"ID":"b9bbe773-63b6-490e-a058-a12050a40b4a","Type":"ContainerDied","Data":"ad28596d87308c0f77f3e58c68f8e6703d28857a0a1bdc075d92d7d3664bb72e"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.925682 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-n54fh" podStartSLOduration=7.9256660960000005 podStartE2EDuration="7.925666096s" podCreationTimestamp="2025-12-02 10:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:00.924401002 +0000 UTC m=+150.633767449" watchObservedRunningTime="2025-12-02 10:16:00.925666096 +0000 UTC m=+150.635032543" Dec 02 
10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.926717 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" event={"ID":"2e365db0-c1ec-415c-8310-02e222ac80c1","Type":"ContainerStarted","Data":"063a6cd656091c3a947ffbd20bd17b3cddcde85c42b486fbbe14a7173057de26"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.926771 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" event={"ID":"2e365db0-c1ec-415c-8310-02e222ac80c1","Type":"ContainerStarted","Data":"6e1a43cca4ba67ec6a05c6c48cdb0affbae74762aea86ca4689f88ec1f95eb96"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.935165 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jbwvc" podStartSLOduration=129.935151755 podStartE2EDuration="2m9.935151755s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:00.872348082 +0000 UTC m=+150.581714549" watchObservedRunningTime="2025-12-02 10:16:00.935151755 +0000 UTC m=+150.644518202" Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.937798 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" event={"ID":"80d9d16b-8c37-4d97-8179-12909d9c2f53","Type":"ContainerStarted","Data":"0541e0166622be6702f40d24f91e0bdae5a42f4d77f237614085c01f01029eba"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.937849 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" event={"ID":"80d9d16b-8c37-4d97-8179-12909d9c2f53","Type":"ContainerStarted","Data":"4af01d49238032d768138671c19d7071baee34fc2fa5479cb989d89fbc171447"} Dec 02 10:16:00 crc 
kubenswrapper[4711]: I1202 10:16:00.938827 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.939915 4711 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-47pfg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.939967 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" podUID="80d9d16b-8c37-4d97-8179-12909d9c2f53" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.958448 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" event={"ID":"94ac6b65-651b-4461-90de-62a5987b52e0","Type":"ContainerStarted","Data":"7aaeca3d529eb31be6748bdfeee7289461e716bced1c9d431d4a8d1b9ecbf143"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.962405 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" event={"ID":"dbab21cc-3581-4de0-911c-6baad4d03087","Type":"ContainerStarted","Data":"95863b04f9884ddf9100fbfbf9de50e90d86f16ece0977d39b22fd10e1aed577"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.972740 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" event={"ID":"5b69aaa1-6c16-4417-a703-281e816a8642","Type":"ContainerStarted","Data":"8a0e0cb9fad2a3a85bf3b2951f41e98a43683fc68b45cc080a5ecf2f33d8bdaa"} Dec 02 10:16:00 crc 
kubenswrapper[4711]: I1202 10:16:00.977727 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e407c89eea154601509f1612781aa31e42ae25924a3c73aa465a3cab8c977492"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.979340 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" event={"ID":"84dbad5b-6e48-48a8-bbe1-76f6e92eb785","Type":"ContainerStarted","Data":"7cffe44303ae84964e12902f63621ada5bc83309b739f503a0f6f514910950f2"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.980360 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" event={"ID":"b3bc47fe-80a3-4a18-bada-6eebb2abff12","Type":"ContainerStarted","Data":"2f4662fe098f00e384c96a8886035c0ffa9ff2a897debf7dbf809e416efdd5f7"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.988906 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:00 crc kubenswrapper[4711]: E1202 10:16:00.990030 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.490009571 +0000 UTC m=+151.199376018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.990124 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6gz75" event={"ID":"91034075-af22-4fae-8684-a8914596c1ac","Type":"ContainerStarted","Data":"98d7479a745b9a6ede337f3f2ca7a255b8726b06154e754a9eba4b0a016f9b3e"} Dec 02 10:16:00 crc kubenswrapper[4711]: I1202 10:16:00.990170 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6gz75" event={"ID":"91034075-af22-4fae-8684-a8914596c1ac","Type":"ContainerStarted","Data":"6d0ac247605f272363b1cfcfda1158f657ae222427edea98d72399e28abb9c80"} Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:00.994939 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.007699 4711 patch_prober.go:28] interesting pod/console-operator-58897d9998-6gz75 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.007766 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6gz75" podUID="91034075-af22-4fae-8684-a8914596c1ac" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.015500 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" podStartSLOduration=130.015478225 podStartE2EDuration="2m10.015478225s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:01.014241802 +0000 UTC m=+150.723608249" watchObservedRunningTime="2025-12-02 10:16:01.015478225 +0000 UTC m=+150.724844672" Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.025403 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" event={"ID":"a26a2431-9c9a-48e9-9797-bf9ac466fc2f","Type":"ContainerStarted","Data":"d1e7a9e39acb2fd92509c2b9d30ede3cb6fd92034a3dd93993023e69125940d4"} Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.025446 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" event={"ID":"a26a2431-9c9a-48e9-9797-bf9ac466fc2f","Type":"ContainerStarted","Data":"36949c85d61b351aa95636d8bf5b04a768776173156eaf1b20c587aa5528c480"} Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.029899 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" event={"ID":"f6b3df69-76ea-4424-8d50-b2646cf2cd0e","Type":"ContainerStarted","Data":"c4d092a854a2da18dbf627d9189fcb50586f89d819a1b64238814d7a23064d56"} Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.043941 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wpzpm" podStartSLOduration=130.043922551 
podStartE2EDuration="2m10.043922551s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:01.042375769 +0000 UTC m=+150.751742216" watchObservedRunningTime="2025-12-02 10:16:01.043922551 +0000 UTC m=+150.753288998" Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.047857 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" event={"ID":"43cb1081-b058-41a4-a47c-e106aa0d2e41","Type":"ContainerStarted","Data":"b7fd38c7f3f470075463e1e45cdfb4bab3f0ad92652d658edc7e85002563edb9"} Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.054135 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xtx7c" event={"ID":"90d228ed-836d-4316-b404-a281ae332a8a","Type":"ContainerStarted","Data":"ac2146700338fc1e576d69942a35cffa4464e8e52ee066c344c1a161f9abc11c"} Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.059209 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" event={"ID":"80ff1a4d-6503-4de6-b29a-d5fcfbefc729","Type":"ContainerStarted","Data":"5d0b304c855965664fce13692eacc2e7cc225c4eb54bc3ae3d4df82c4ef21c43"} Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.060482 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" event={"ID":"0e0f1361-ab19-4762-9e5d-69d42bef5fb0","Type":"ContainerStarted","Data":"ef929972da2ad1d3e3dab225a174b1a0e14aa82051c99e24eed326eeebad5804"} Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.072118 4711 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m2lb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 
10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.072181 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m2lb" podUID="585fe769-d9ad-42f1-8cb6-29904018f637" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.091186 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:01 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:01 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:01 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.091249 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.091656 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.185097 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-02 10:16:01.68505236 +0000 UTC m=+151.394418827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.195042 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.195259 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.695230178 +0000 UTC m=+151.404596635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.195597 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.196123 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.696109772 +0000 UTC m=+151.405476209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.237271 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6gz75" podStartSLOduration=130.237248934 podStartE2EDuration="2m10.237248934s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:01.073372565 +0000 UTC m=+150.782739012" watchObservedRunningTime="2025-12-02 10:16:01.237248934 +0000 UTC m=+150.946615381" Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.299480 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.299892 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.799872261 +0000 UTC m=+151.509238708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.400693 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.401102 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:01.901082992 +0000 UTC m=+151.610449439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.503791 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.504561 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.004534922 +0000 UTC m=+151.713901389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.605544 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.605877 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.105862216 +0000 UTC m=+151.815228663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.730547 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.730871 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.230854345 +0000 UTC m=+151.940220792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.832417 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.833232 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.333188025 +0000 UTC m=+152.042554482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.840773 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bs6xm" Dec 02 10:16:01 crc kubenswrapper[4711]: I1202 10:16:01.933474 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:01 crc kubenswrapper[4711]: E1202 10:16:01.934157 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.434131628 +0000 UTC m=+152.143498075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.048732 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.049406 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.549387662 +0000 UTC m=+152.258754109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.084840 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:02 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:02 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:02 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.085009 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.105342 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" event={"ID":"28ac9884-97cd-4efb-90f1-b742a5a9a519","Type":"ContainerStarted","Data":"2227d49caa396c0f8bc5da2a6e36f07d8d4e98814d1c70a194613a2e84ea590a"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.150394 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.150686 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.650670834 +0000 UTC m=+152.360037281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.166336 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" event={"ID":"0e0f1361-ab19-4762-9e5d-69d42bef5fb0","Type":"ContainerStarted","Data":"20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.167312 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.169599 4711 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6sr4n container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.169643 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" podUID="0e0f1361-ab19-4762-9e5d-69d42bef5fb0" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.236148 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.242215 4711 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l54rr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.242276 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" podUID="b3bc47fe-80a3-4a18-bada-6eebb2abff12" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.288766 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.291505 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.791486165 +0000 UTC m=+152.500852612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.303620 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" event={"ID":"94ac6b65-651b-4461-90de-62a5987b52e0","Type":"ContainerStarted","Data":"300fc28625783fab2e4dcc561b2da0aa9ede706b88fc4414530ae0acf778615c"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.345105 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" event={"ID":"fd1e0117-926f-4673-ac70-b25dc56e7403","Type":"ContainerStarted","Data":"ceb09dabf4c5921b9f447ad8f0f3eee48057429c191aec483ca4834ff3d1c5f5"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.367465 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" event={"ID":"5b69aaa1-6c16-4417-a703-281e816a8642","Type":"ContainerStarted","Data":"1ea1b7ea231be3d44b485ac4288fe2ee4c8dbb0120c6e427250f03b8e0b4b673"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.371032 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" event={"ID":"334bed7f-2766-4ef7-9eee-9312c1453bbb","Type":"ContainerStarted","Data":"78ed42fc5e702b2baf3d5cbfec06499bf24789f3a6b86c22f844ecaa226b9442"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.395568 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.396080 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:02.896048836 +0000 UTC m=+152.605415293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.441155 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" event={"ID":"8f8ce2ea-2335-474b-97c9-1108e2157a2b","Type":"ContainerStarted","Data":"bf1d31695694d79af63fc6edc9f8aeb44f8006b3e44459464f02149c09882c3d"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.448695 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2518cafc04b07ca4a892a401b6d32788bcca2b58566511de6762f5952f595dbe"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.483536 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4b9e274afa9c0486949b9c725569f1053833a1b23cdc102fcca50fe5a78a3e55"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.507183 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.510261 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.010227399 +0000 UTC m=+152.719593846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.539969 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"56c24bd17136484fd8387ce8767b1f282fea89203418e84b93a478ea75c12952"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.568476 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" 
event={"ID":"84dbad5b-6e48-48a8-bbe1-76f6e92eb785","Type":"ContainerStarted","Data":"570d7a98c3d76ef9a0ed0a24d03e45cdf052691099142369ff66aa174405ad7f"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.613470 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.613745 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.113724032 +0000 UTC m=+152.823090489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.613969 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.614443 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.114419881 +0000 UTC m=+152.823786398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.616762 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" event={"ID":"80ff1a4d-6503-4de6-b29a-d5fcfbefc729","Type":"ContainerStarted","Data":"d82e3b44f285376c1b60bbe1078948616236547cb24c34a372ced8c81bd9e74b"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.646022 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" event={"ID":"43cb1081-b058-41a4-a47c-e106aa0d2e41","Type":"ContainerStarted","Data":"b374a82fe312414ddcc42a0a4eb643d3c5c4bb765f93699e99684b7717ff6b6e"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.647048 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.648725 4711 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g9jnr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 02 10:16:02 crc 
kubenswrapper[4711]: I1202 10:16:02.648786 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" podUID="43cb1081-b058-41a4-a47c-e106aa0d2e41" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.674964 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" event={"ID":"dadea7f6-d37c-458a-9f16-4d3477f4b6f3","Type":"ContainerStarted","Data":"3e6828856a20944f1eea57cc8e35332fbc03f0656edc6836cbdaa30c684b4a75"} Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.696414 4711 patch_prober.go:28] interesting pod/console-operator-58897d9998-6gz75 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.696479 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6gz75" podUID="91034075-af22-4fae-8684-a8914596c1ac" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.696614 4711 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-47pfg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.696642 4711 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" podUID="80d9d16b-8c37-4d97-8179-12909d9c2f53" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.714836 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.715789 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.215770695 +0000 UTC m=+152.925137142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.732363 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" podStartSLOduration=131.732341377 podStartE2EDuration="2m11.732341377s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:02.593568102 +0000 UTC m=+152.302934559" watchObservedRunningTime="2025-12-02 10:16:02.732341377 +0000 UTC m=+152.441707824" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.736878 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" podStartSLOduration=131.73685259 podStartE2EDuration="2m11.73685259s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:02.730015434 +0000 UTC m=+152.439381901" watchObservedRunningTime="2025-12-02 10:16:02.73685259 +0000 UTC m=+152.446219037" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.821702 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: 
\"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.823107 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.323090842 +0000 UTC m=+153.032457279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.831654 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" podStartSLOduration=131.831629664 podStartE2EDuration="2m11.831629664s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:02.78855655 +0000 UTC m=+152.497922997" watchObservedRunningTime="2025-12-02 10:16:02.831629664 +0000 UTC m=+152.540996111" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.895709 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n9qbk" podStartSLOduration=131.895684302 podStartE2EDuration="2m11.895684302s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:02.830490413 +0000 UTC 
m=+152.539856860" watchObservedRunningTime="2025-12-02 10:16:02.895684302 +0000 UTC m=+152.605050749" Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.923777 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:02 crc kubenswrapper[4711]: E1202 10:16:02.924091 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.424052305 +0000 UTC m=+153.133418762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:02 crc kubenswrapper[4711]: I1202 10:16:02.978084 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5lktx" podStartSLOduration=131.978059108 podStartE2EDuration="2m11.978059108s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:02.908598403 +0000 UTC m=+152.617964850" watchObservedRunningTime="2025-12-02 10:16:02.978059108 +0000 UTC m=+152.687425555" Dec 02 10:16:02 
crc kubenswrapper[4711]: I1202 10:16:02.978540 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9ptzk" podStartSLOduration=131.978534741 podStartE2EDuration="2m11.978534741s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:02.976435034 +0000 UTC m=+152.685801481" watchObservedRunningTime="2025-12-02 10:16:02.978534741 +0000 UTC m=+152.687901198" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.027485 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.027763 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.527750303 +0000 UTC m=+153.237116750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.083202 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:03 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:03 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:03 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.083477 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.084230 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22ngp" podStartSLOduration=132.084218693 podStartE2EDuration="2m12.084218693s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:03.040777019 +0000 UTC m=+152.750143466" watchObservedRunningTime="2025-12-02 10:16:03.084218693 +0000 UTC m=+152.793585140" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.133870 4711 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.134118 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.634086503 +0000 UTC m=+153.343452950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.134254 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.134646 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.634629848 +0000 UTC m=+153.343996295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.199215 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" podStartSLOduration=63.199188639 podStartE2EDuration="1m3.199188639s" podCreationTimestamp="2025-12-02 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:03.118343864 +0000 UTC m=+152.827710311" watchObservedRunningTime="2025-12-02 10:16:03.199188639 +0000 UTC m=+152.908555096" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.244631 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.244913 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.744897344 +0000 UTC m=+153.454263791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.246799 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" podStartSLOduration=132.246778526 podStartE2EDuration="2m12.246778526s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:03.24436126 +0000 UTC m=+152.953727697" watchObservedRunningTime="2025-12-02 10:16:03.246778526 +0000 UTC m=+152.956144973" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.247301 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" podStartSLOduration=132.24729463 podStartE2EDuration="2m12.24729463s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:03.200217747 +0000 UTC m=+152.909584184" watchObservedRunningTime="2025-12-02 10:16:03.24729463 +0000 UTC m=+152.956661077" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.346053 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: 
\"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.346579 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.846565617 +0000 UTC m=+153.555932064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.460861 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.960836194 +0000 UTC m=+153.670202631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.461343 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.461648 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.462069 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:03.962050587 +0000 UTC m=+153.671417034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.562350 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.562723 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.062704361 +0000 UTC m=+153.772070808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.664139 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.664556 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.164536519 +0000 UTC m=+153.873902976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.709976 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" event={"ID":"8e8230b2-fb50-43c7-8a69-af1d02cce895","Type":"ContainerStarted","Data":"bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.710921 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.713450 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" event={"ID":"a26a2431-9c9a-48e9-9797-bf9ac466fc2f","Type":"ContainerStarted","Data":"ca4de28135182bfdc5165d06a8a5924cac37781174073dcf4db3f44fd8e6192c"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.713565 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.722936 4711 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m5tws container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.723021 4711 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.743368 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" event={"ID":"f6b3df69-76ea-4424-8d50-b2646cf2cd0e","Type":"ContainerStarted","Data":"96c4bac82d288a35d85451115c80140b5c4d08613442e6e216ecc4574bab626a"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.748202 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" event={"ID":"eb441657-98d8-4fbe-9f7f-50c64f1414ba","Type":"ContainerStarted","Data":"60288caa51593d8180f3c94390aa1b0e5d002a6f45e25b1305d7cb534cd57096"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.748245 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" event={"ID":"eb441657-98d8-4fbe-9f7f-50c64f1414ba","Type":"ContainerStarted","Data":"6550fb1192e5caf32dd9a27de8b83b73b8d93a30734b72a83eaa25552a0ce23c"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.750254 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" event={"ID":"28ac9884-97cd-4efb-90f1-b742a5a9a519","Type":"ContainerStarted","Data":"2f48c7b0a64bf42a151ec20652b1274a3bfeeff897b67b0949fa65764c1d35e2"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.753448 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" 
event={"ID":"80ff1a4d-6503-4de6-b29a-d5fcfbefc729","Type":"ContainerStarted","Data":"a2352407840ae7224fce943755da8dc7ecb76ef56f4372a5626d197770ba8e76"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.754397 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" event={"ID":"d4dfbfed-09bb-4b36-bcfb-326077388f98","Type":"ContainerStarted","Data":"898f198990c3ea4bb6625f1a17819ffbb324b1fdf7163970dcf928ed5e0908d6"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.758972 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" event={"ID":"b3bc47fe-80a3-4a18-bada-6eebb2abff12","Type":"ContainerStarted","Data":"133fec8112012c99ccdad4ed24e695c7a0fb26b7dcf82533c7b490838fb305c5"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.759488 4711 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l54rr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.759536 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" podUID="b3bc47fe-80a3-4a18-bada-6eebb2abff12" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.762685 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hk72" event={"ID":"dbab21cc-3581-4de0-911c-6baad4d03087","Type":"ContainerStarted","Data":"f9b5c7f5f87429e0281dea630c350263ff587742bb82995ea8903109ec32e688"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.768472 4711 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.768593 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.268566296 +0000 UTC m=+153.977932743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.768890 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.769620 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e9e4c7bc2bc1d9967d52a3c48a43315f018023896994f847fa25875c2eaae67a"} Dec 02 10:16:03 crc 
kubenswrapper[4711]: E1202 10:16:03.770569 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.27055723 +0000 UTC m=+153.979923767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.775543 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" event={"ID":"140261a1-9b1a-4829-8926-6ddaaaf55503","Type":"ContainerStarted","Data":"13d9dd1ec56f5c5de6b3b56fea7034a7cfe24bf91b56f33e23180a0636a02af1"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.778351 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4b72eeb78cc8fe05290f8f82dbe851a3cddd443021d792b0a4e9b28de3e708d7"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.778704 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.790061 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" event={"ID":"b9bbe773-63b6-490e-a058-a12050a40b4a","Type":"ContainerStarted","Data":"70d1c9c537c239f4bf105a1eda3569efece64ed54f3cda5f632d8edc3a9eba35"} 
Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.791831 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" event={"ID":"94ac6b65-651b-4461-90de-62a5987b52e0","Type":"ContainerStarted","Data":"114629d340de6380c86364ce3e978a43b3b4b3027b61b1e5bc4ba4e6b263b41a"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.798252 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qzmsv" event={"ID":"4d813713-0bc0-4865-a7a2-dcf1a15ae12f","Type":"ContainerStarted","Data":"3f9e61c9ca3fc9cac9a1b02e62f624fb3153b18ac0cd67741b35de706a52e279"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.798288 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qzmsv" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.798297 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qzmsv" event={"ID":"4d813713-0bc0-4865-a7a2-dcf1a15ae12f","Type":"ContainerStarted","Data":"27740c31be37b6144714f6806ef3a2b463635aff9d9c39d2389a0ccfac3d4777"} Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.816637 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" podStartSLOduration=132.816618907 podStartE2EDuration="2m12.816618907s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:03.813909733 +0000 UTC m=+153.523276190" watchObservedRunningTime="2025-12-02 10:16:03.816618907 +0000 UTC m=+153.525985354" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.841557 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g9jnr" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 
10:16:03.870645 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.871265 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.371246136 +0000 UTC m=+154.080612583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.944273 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qzmsv" podStartSLOduration=10.944199596 podStartE2EDuration="10.944199596s" podCreationTimestamp="2025-12-02 10:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:03.892449505 +0000 UTC m=+153.601815952" watchObservedRunningTime="2025-12-02 10:16:03.944199596 +0000 UTC m=+153.653566043" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.947730 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.957512 
4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.961224 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.961564 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.963219 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.966316 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z6sld" podStartSLOduration=132.966296889 podStartE2EDuration="2m12.966296889s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:03.964475218 +0000 UTC m=+153.673841675" watchObservedRunningTime="2025-12-02 10:16:03.966296889 +0000 UTC m=+153.675663336" Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.972821 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:03 crc kubenswrapper[4711]: E1202 10:16:03.978498 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.47846911 +0000 UTC m=+154.187835557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:03 crc kubenswrapper[4711]: I1202 10:16:03.987219 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-47pfg" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.000339 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-st4hg" podStartSLOduration=133.000296256 podStartE2EDuration="2m13.000296256s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:03.997936281 +0000 UTC m=+153.707302728" watchObservedRunningTime="2025-12-02 10:16:04.000296256 +0000 UTC m=+153.709662703" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.047373 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hvqhn" podStartSLOduration=133.047357169 podStartE2EDuration="2m13.047357169s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:04.046054624 +0000 UTC m=+153.755421071" watchObservedRunningTime="2025-12-02 10:16:04.047357169 +0000 UTC 
m=+153.756723616" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.048829 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" podStartSLOduration=133.048819839 podStartE2EDuration="2m13.048819839s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:04.031285231 +0000 UTC m=+153.740651678" watchObservedRunningTime="2025-12-02 10:16:04.048819839 +0000 UTC m=+153.758186286" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.076544 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.076749 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64a42c54-5124-490a-8f2c-5b39d57053a5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"64a42c54-5124-490a-8f2c-5b39d57053a5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.076796 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64a42c54-5124-490a-8f2c-5b39d57053a5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"64a42c54-5124-490a-8f2c-5b39d57053a5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.077903 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.577871231 +0000 UTC m=+154.287237678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.082076 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:04 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:04 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:04 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.082543 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.121994 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7kmjp" podStartSLOduration=133.121969104 podStartE2EDuration="2m13.121969104s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-02 10:16:04.076291878 +0000 UTC m=+153.785658325" watchObservedRunningTime="2025-12-02 10:16:04.121969104 +0000 UTC m=+153.831335551" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.124823 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wrrqk" podStartSLOduration=133.124814941 podStartE2EDuration="2m13.124814941s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:04.110851401 +0000 UTC m=+153.820217858" watchObservedRunningTime="2025-12-02 10:16:04.124814941 +0000 UTC m=+153.834181388" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.177935 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64a42c54-5124-490a-8f2c-5b39d57053a5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"64a42c54-5124-490a-8f2c-5b39d57053a5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.178288 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64a42c54-5124-490a-8f2c-5b39d57053a5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"64a42c54-5124-490a-8f2c-5b39d57053a5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.178400 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.178454 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64a42c54-5124-490a-8f2c-5b39d57053a5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"64a42c54-5124-490a-8f2c-5b39d57053a5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.178838 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.678822904 +0000 UTC m=+154.388189351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.209833 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64a42c54-5124-490a-8f2c-5b39d57053a5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"64a42c54-5124-490a-8f2c-5b39d57053a5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.244373 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2mmnj" podStartSLOduration=133.244349301 podStartE2EDuration="2m13.244349301s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:04.243674423 +0000 UTC m=+153.953040870" watchObservedRunningTime="2025-12-02 10:16:04.244349301 +0000 UTC m=+153.953715748" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.275602 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.279448 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.279696 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.779672604 +0000 UTC m=+154.489039051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.279784 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.280303 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.780281461 +0000 UTC m=+154.489647908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.304246 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6gz75" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.382529 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.383122 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.883087574 +0000 UTC m=+154.592454021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.484330 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.484736 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:04.984715096 +0000 UTC m=+154.694081553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.585173 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.585356 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.08532289 +0000 UTC m=+154.794689327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.585462 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.585805 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.085797453 +0000 UTC m=+154.795163900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.645168 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.685000 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.686516 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.686722 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.186688034 +0000 UTC m=+154.896054471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.686970 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.687544 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.187481536 +0000 UTC m=+154.896847983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.789534 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.790591 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.290572137 +0000 UTC m=+154.999938584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.891975 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.892363 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.392346603 +0000 UTC m=+155.101713050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.911565 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" event={"ID":"b9bbe773-63b6-490e-a058-a12050a40b4a","Type":"ContainerStarted","Data":"66b2144d1a002967a62714a8558973b47be918218e9383e4ca036034f507d19e"} Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.922218 4711 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m5tws container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.922265 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.970136 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" podStartSLOduration=133.970103124 podStartE2EDuration="2m13.970103124s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:04.969098716 
+0000 UTC m=+154.678465163" watchObservedRunningTime="2025-12-02 10:16:04.970103124 +0000 UTC m=+154.679469571" Dec 02 10:16:04 crc kubenswrapper[4711]: I1202 10:16:04.993651 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:04 crc kubenswrapper[4711]: E1202 10:16:04.994870 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.494849238 +0000 UTC m=+155.204215685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.096936 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.100337 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.600320735 +0000 UTC m=+155.309687182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.103145 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:05 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:05 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:05 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.103178 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.198083 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.198488 4711 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.698469701 +0000 UTC m=+155.407836158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.274516 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.299345 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.299695 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.799682131 +0000 UTC m=+155.509048568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: W1202 10:16:05.315362 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod64a42c54_5124_490a_8f2c_5b39d57053a5.slice/crio-f0be26251dd9bd3123c874c1a6febc7bb5dd218abfd3103aab9d69deb1ed8e2d WatchSource:0}: Error finding container f0be26251dd9bd3123c874c1a6febc7bb5dd218abfd3103aab9d69deb1ed8e2d: Status 404 returned error can't find the container with id f0be26251dd9bd3123c874c1a6febc7bb5dd218abfd3103aab9d69deb1ed8e2d Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.402206 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.402568 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:05.902548046 +0000 UTC m=+155.611914493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.503657 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.504033 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.004019324 +0000 UTC m=+155.713385771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.604509 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.605128 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.10510407 +0000 UTC m=+155.814470517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.706034 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.706632 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.206615879 +0000 UTC m=+155.915982326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.817895 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.818600 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.318570412 +0000 UTC m=+156.027936859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.919575 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:05 crc kubenswrapper[4711]: E1202 10:16:05.920129 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.420110481 +0000 UTC m=+156.129476918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.922137 4711 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l54rr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.922226 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" podUID="b3bc47fe-80a3-4a18-bada-6eebb2abff12" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.950460 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"64a42c54-5124-490a-8f2c-5b39d57053a5","Type":"ContainerStarted","Data":"f0be26251dd9bd3123c874c1a6febc7bb5dd218abfd3103aab9d69deb1ed8e2d"} Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.955729 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" event={"ID":"d4dfbfed-09bb-4b36-bcfb-326077388f98","Type":"ContainerStarted","Data":"e2c1adedb863100533ca284e1595cce1952474a58d468415fd8b8e322c8cd850"} Dec 02 10:16:05 crc 
kubenswrapper[4711]: I1202 10:16:05.957491 4711 generic.go:334] "Generic (PLEG): container finished" podID="84dbad5b-6e48-48a8-bbe1-76f6e92eb785" containerID="570d7a98c3d76ef9a0ed0a24d03e45cdf052691099142369ff66aa174405ad7f" exitCode=0 Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.957825 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" event={"ID":"84dbad5b-6e48-48a8-bbe1-76f6e92eb785","Type":"ContainerDied","Data":"570d7a98c3d76ef9a0ed0a24d03e45cdf052691099142369ff66aa174405ad7f"} Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.958679 4711 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m5tws container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Dec 02 10:16:05 crc kubenswrapper[4711]: I1202 10:16:05.958748 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.020478 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.020785 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.520751156 +0000 UTC m=+156.230117613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.036297 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.037878 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.537860552 +0000 UTC m=+156.247226999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.082831 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:06 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:06 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:06 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.082893 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.138554 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.139363 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 10:16:06.63934735 +0000 UTC m=+156.348713797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.244900 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.245299 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.745280928 +0000 UTC m=+156.454647375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.261891 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7l5th"] Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.263041 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.269119 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.288736 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7l5th"] Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.346282 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.346625 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.846608412 +0000 UTC m=+156.555974859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.449834 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-catalog-content\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.450259 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.450641 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:06.950623549 +0000 UTC m=+156.659990096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.450682 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-utilities\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.450725 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfn9x\" (UniqueName: \"kubernetes.io/projected/d37a3481-62b0-42fd-b6c9-198f0e5aac93-kube-api-access-nfn9x\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.514402 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tb9pv"] Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.515718 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.520673 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.538074 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tb9pv"] Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.551097 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.551215 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-utilities\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.551257 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfn9x\" (UniqueName: \"kubernetes.io/projected/d37a3481-62b0-42fd-b6c9-198f0e5aac93-kube-api-access-nfn9x\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.551311 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-728rt\" (UniqueName: \"kubernetes.io/projected/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-kube-api-access-728rt\") pod \"certified-operators-tb9pv\" (UID: 
\"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.551347 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-catalog-content\") pod \"certified-operators-tb9pv\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.551386 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-utilities\") pod \"certified-operators-tb9pv\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.551410 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-catalog-content\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.551849 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-catalog-content\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.551917 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 10:16:07.05190286 +0000 UTC m=+156.761269307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.552182 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-utilities\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.588973 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfn9x\" (UniqueName: \"kubernetes.io/projected/d37a3481-62b0-42fd-b6c9-198f0e5aac93-kube-api-access-nfn9x\") pod \"community-operators-7l5th\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") " pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.612275 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bzwf5"] Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.613273 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.624937 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzwf5"] Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.656289 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-728rt\" (UniqueName: \"kubernetes.io/projected/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-kube-api-access-728rt\") pod \"certified-operators-tb9pv\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.656336 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-catalog-content\") pod \"certified-operators-tb9pv\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.656371 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-utilities\") pod \"certified-operators-tb9pv\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.656410 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.656767 4711 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:07.15675362 +0000 UTC m=+156.866120067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.657280 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-catalog-content\") pod \"certified-operators-tb9pv\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.657475 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-utilities\") pod \"certified-operators-tb9pv\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.709064 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-728rt\" (UniqueName: \"kubernetes.io/projected/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-kube-api-access-728rt\") pod \"certified-operators-tb9pv\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") " pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.757602 4711 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.757780 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-utilities\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.757869 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8h8\" (UniqueName: \"kubernetes.io/projected/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-kube-api-access-gs8h8\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.757891 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-catalog-content\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.758299 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:07.258281109 +0000 UTC m=+156.967647556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.821274 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vs5vb"] Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.822265 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.837601 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vs5vb"] Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.851802 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.859381 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-utilities\") pod \"certified-operators-vs5vb\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.859449 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.859477 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs8h8\" (UniqueName: \"kubernetes.io/projected/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-kube-api-access-gs8h8\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.859510 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-catalog-content\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.859544 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-catalog-content\") pod \"certified-operators-vs5vb\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.859570 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4swh\" (UniqueName: \"kubernetes.io/projected/38739913-4803-444c-b624-013235f6eec3-kube-api-access-s4swh\") pod \"certified-operators-vs5vb\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.859610 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-utilities\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.860030 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-utilities\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.860285 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-catalog-content\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.860436 4711 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:07.360415875 +0000 UTC m=+157.069782312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.888365 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.971482 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.971661 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-utilities\") pod \"certified-operators-vs5vb\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.971718 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-catalog-content\") pod \"certified-operators-vs5vb\" (UID: 
\"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.971741 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4swh\" (UniqueName: \"kubernetes.io/projected/38739913-4803-444c-b624-013235f6eec3-kube-api-access-s4swh\") pod \"certified-operators-vs5vb\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: E1202 10:16:06.972127 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:07.472092149 +0000 UTC m=+157.181458596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.972467 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-utilities\") pod \"certified-operators-vs5vb\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.972658 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-catalog-content\") pod 
\"certified-operators-vs5vb\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.985943 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs8h8\" (UniqueName: \"kubernetes.io/projected/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-kube-api-access-gs8h8\") pod \"community-operators-bzwf5\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:06 crc kubenswrapper[4711]: I1202 10:16:06.988243 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4swh\" (UniqueName: \"kubernetes.io/projected/38739913-4803-444c-b624-013235f6eec3-kube-api-access-s4swh\") pod \"certified-operators-vs5vb\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.031215 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"64a42c54-5124-490a-8f2c-5b39d57053a5","Type":"ContainerStarted","Data":"611014841caa8db19bc30d08e4c094166d5125b33c1734696e4da26326f28e62"} Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.033920 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" event={"ID":"d4dfbfed-09bb-4b36-bcfb-326077388f98","Type":"ContainerStarted","Data":"e61eeedc8b436c9f85bc43084f2060bd1856e781606ac024eef90154404b051a"} Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.037512 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.037560 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:16:07 crc kubenswrapper[4711]: 
I1202 10:16:07.037567 4711 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m2lb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.037658 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m2lb" podUID="585fe769-d9ad-42f1-8cb6-29904018f637" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.038187 4711 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m2lb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.038230 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4m2lb" podUID="585fe769-d9ad-42f1-8cb6-29904018f637" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.063226 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.063205575 podStartE2EDuration="4.063205575s" podCreationTimestamp="2025-12-02 10:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:07.062436164 +0000 UTC m=+156.771802601" watchObservedRunningTime="2025-12-02 10:16:07.063205575 +0000 UTC m=+156.772572042" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 
10:16:07.078136 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.092526 4711 patch_prober.go:28] interesting pod/console-f9d7485db-g2lxx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.092578 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2lxx" podUID="8c1f70ef-1183-4621-bb91-ffe2d31fa391" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.093383 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:07 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:07 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:07 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.093410 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.093990 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" 
(UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:07 crc kubenswrapper[4711]: E1202 10:16:07.094293 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:07.594278842 +0000 UTC m=+157.303645289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.134895 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.143364 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.143400 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.172181 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.186496 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.186625 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.197502 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:07 crc kubenswrapper[4711]: E1202 10:16:07.198590 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:16:07.698565496 +0000 UTC m=+157.407931943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.224587 4711 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2c7s8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.224687 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" podUID="b9bbe773-63b6-490e-a058-a12050a40b4a" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.239988 4711 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.249828 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.344404 4711 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-02T10:16:07.240019017Z","Handler":null,"Name":""} Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.345807 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:07 crc kubenswrapper[4711]: E1202 10:16:07.346204 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:16:07.846190663 +0000 UTC m=+157.555557110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jc7xv" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.349533 4711 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.349641 4711 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.446591 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.481276 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.492583 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l54rr" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.550562 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.561275 4711 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.561313 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.618044 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jc7xv\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:07 crc 
kubenswrapper[4711]: I1202 10:16:07.653708 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.665821 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tb9pv"] Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.714352 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vs5vb"] Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.724383 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7l5th"] Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.755072 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-secret-volume\") pod \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.755178 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d86dp\" (UniqueName: \"kubernetes.io/projected/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-kube-api-access-d86dp\") pod \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.755265 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-config-volume\") pod \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\" (UID: \"84dbad5b-6e48-48a8-bbe1-76f6e92eb785\") " Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.755906 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-config-volume" (OuterVolumeSpecName: "config-volume") pod "84dbad5b-6e48-48a8-bbe1-76f6e92eb785" (UID: "84dbad5b-6e48-48a8-bbe1-76f6e92eb785"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.758458 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzwf5"] Dec 02 10:16:07 crc kubenswrapper[4711]: W1202 10:16:07.775240 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd37a3481_62b0_42fd_b6c9_198f0e5aac93.slice/crio-c3660ee566566ece5bb4d9d6e041ddc35020d0c2a636e9efc7e63caffcf16d40 WatchSource:0}: Error finding container c3660ee566566ece5bb4d9d6e041ddc35020d0c2a636e9efc7e63caffcf16d40: Status 404 returned error can't find the container with id c3660ee566566ece5bb4d9d6e041ddc35020d0c2a636e9efc7e63caffcf16d40 Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.775415 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-kube-api-access-d86dp" (OuterVolumeSpecName: "kube-api-access-d86dp") pod "84dbad5b-6e48-48a8-bbe1-76f6e92eb785" (UID: "84dbad5b-6e48-48a8-bbe1-76f6e92eb785"). InnerVolumeSpecName "kube-api-access-d86dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.775781 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84dbad5b-6e48-48a8-bbe1-76f6e92eb785" (UID: "84dbad5b-6e48-48a8-bbe1-76f6e92eb785"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.789540 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.853229 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.856536 4711 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.856556 4711 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:07 crc kubenswrapper[4711]: I1202 10:16:07.856565 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d86dp\" (UniqueName: \"kubernetes.io/projected/84dbad5b-6e48-48a8-bbe1-76f6e92eb785-kube-api-access-d86dp\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.041493 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs5vb" event={"ID":"38739913-4803-444c-b624-013235f6eec3","Type":"ContainerStarted","Data":"72d969900a7fc329ac9d76d8bbf5be6d20d70402a77bf4dd2f7f8f66f5cc1a8e"} Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.043521 4711 generic.go:334] "Generic (PLEG): container finished" podID="64a42c54-5124-490a-8f2c-5b39d57053a5" containerID="611014841caa8db19bc30d08e4c094166d5125b33c1734696e4da26326f28e62" exitCode=0 Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.043575 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"64a42c54-5124-490a-8f2c-5b39d57053a5","Type":"ContainerDied","Data":"611014841caa8db19bc30d08e4c094166d5125b33c1734696e4da26326f28e62"} Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.050009 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" event={"ID":"d4dfbfed-09bb-4b36-bcfb-326077388f98","Type":"ContainerStarted","Data":"e6b7e4b254e1d744092cb3be1f0bf4d676e625bb4e3333ae0ee45a8f8958be25"} Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.052652 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwf5" event={"ID":"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb","Type":"ContainerStarted","Data":"1300ce6053f2760af8f660e0ddc04728eca159f8c1866e27e7a5b50e66a9c2a0"} Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.053678 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb9pv" event={"ID":"ef385405-9334-4ba2-a7ed-abd9d51cbd5d","Type":"ContainerStarted","Data":"397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8"} Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.053697 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb9pv" event={"ID":"ef385405-9334-4ba2-a7ed-abd9d51cbd5d","Type":"ContainerStarted","Data":"67791e10e6ec82588058d250bd9ed0e25b8eb2bc7100bd270ddeafda46ab00dc"} Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.054517 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7l5th" event={"ID":"d37a3481-62b0-42fd-b6c9-198f0e5aac93","Type":"ContainerStarted","Data":"c3660ee566566ece5bb4d9d6e041ddc35020d0c2a636e9efc7e63caffcf16d40"} Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.058133 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" event={"ID":"84dbad5b-6e48-48a8-bbe1-76f6e92eb785","Type":"ContainerDied","Data":"7cffe44303ae84964e12902f63621ada5bc83309b739f503a0f6f514910950f2"} Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.058211 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cffe44303ae84964e12902f63621ada5bc83309b739f503a0f6f514910950f2" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.058289 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.083484 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:08 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:08 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:08 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.083542 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.085923 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhgqn" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.098377 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mrbgr" podStartSLOduration=15.098347975 podStartE2EDuration="15.098347975s" podCreationTimestamp="2025-12-02 10:15:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:08.096929346 +0000 UTC m=+157.806295793" watchObservedRunningTime="2025-12-02 10:16:08.098347975 +0000 UTC m=+157.807714422" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.198844 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-grd8v"] Dec 02 10:16:08 crc kubenswrapper[4711]: E1202 10:16:08.203176 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dbad5b-6e48-48a8-bbe1-76f6e92eb785" containerName="collect-profiles" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.203240 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dbad5b-6e48-48a8-bbe1-76f6e92eb785" containerName="collect-profiles" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.203373 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="84dbad5b-6e48-48a8-bbe1-76f6e92eb785" containerName="collect-profiles" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.204235 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.211831 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grd8v"] Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.213597 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.351069 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jc7xv"] Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.370318 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-catalog-content\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.370366 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-utilities\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.370413 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frsxm\" (UniqueName: \"kubernetes.io/projected/0c729b20-2f40-43b6-8432-062f8a6cce37-kube-api-access-frsxm\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: W1202 10:16:08.428606 4711 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd31bf5d5_5c80_4e32_ac69_6fb031c2cdf0.slice/crio-85ac36a1471a71c665f7dc8c1139c07a9fbca9df659ea7249168264eae3aba50 WatchSource:0}: Error finding container 85ac36a1471a71c665f7dc8c1139c07a9fbca9df659ea7249168264eae3aba50: Status 404 returned error can't find the container with id 85ac36a1471a71c665f7dc8c1139c07a9fbca9df659ea7249168264eae3aba50 Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.471925 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frsxm\" (UniqueName: \"kubernetes.io/projected/0c729b20-2f40-43b6-8432-062f8a6cce37-kube-api-access-frsxm\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.472056 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-catalog-content\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.472084 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-utilities\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.472624 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-utilities\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc 
kubenswrapper[4711]: I1202 10:16:08.473216 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-catalog-content\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.493115 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frsxm\" (UniqueName: \"kubernetes.io/projected/0c729b20-2f40-43b6-8432-062f8a6cce37-kube-api-access-frsxm\") pod \"redhat-marketplace-grd8v\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") " pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.537132 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.596767 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-brtmx"] Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.597800 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.607432 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brtmx"] Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.675146 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-catalog-content\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.675186 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v6mx\" (UniqueName: \"kubernetes.io/projected/226fe786-af06-4294-a1e5-7e1c4aa86551-kube-api-access-6v6mx\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.675206 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-utilities\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.776798 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-catalog-content\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.776850 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6v6mx\" (UniqueName: \"kubernetes.io/projected/226fe786-af06-4294-a1e5-7e1c4aa86551-kube-api-access-6v6mx\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.776869 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-utilities\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.777532 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-utilities\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.777853 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-catalog-content\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.792128 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grd8v"] Dec 02 10:16:08 crc kubenswrapper[4711]: W1202 10:16:08.797937 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c729b20_2f40_43b6_8432_062f8a6cce37.slice/crio-915ddd5e36cb156220a1d0b907e4fc71ae4f39cde81572096ee79bee0d922e9c WatchSource:0}: Error finding container 
915ddd5e36cb156220a1d0b907e4fc71ae4f39cde81572096ee79bee0d922e9c: Status 404 returned error can't find the container with id 915ddd5e36cb156220a1d0b907e4fc71ae4f39cde81572096ee79bee0d922e9c Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.800226 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v6mx\" (UniqueName: \"kubernetes.io/projected/226fe786-af06-4294-a1e5-7e1c4aa86551-kube-api-access-6v6mx\") pod \"redhat-marketplace-brtmx\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:08 crc kubenswrapper[4711]: I1202 10:16:08.945424 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.072145 4711 generic.go:334] "Generic (PLEG): container finished" podID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerID="397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8" exitCode=0 Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.072253 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb9pv" event={"ID":"ef385405-9334-4ba2-a7ed-abd9d51cbd5d","Type":"ContainerDied","Data":"397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8"} Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.075436 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.083005 4711 generic.go:334] "Generic (PLEG): container finished" podID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerID="662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff" exitCode=0 Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.084897 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:09 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:09 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:09 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.084961 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.086362 4711 generic.go:334] "Generic (PLEG): container finished" podID="38739913-4803-444c-b624-013235f6eec3" containerID="9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7" exitCode=0 Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.107493 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.108326 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7l5th" event={"ID":"d37a3481-62b0-42fd-b6c9-198f0e5aac93","Type":"ContainerDied","Data":"662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff"} Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.108370 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs5vb" event={"ID":"38739913-4803-444c-b624-013235f6eec3","Type":"ContainerDied","Data":"9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7"} Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.108387 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grd8v" 
event={"ID":"0c729b20-2f40-43b6-8432-062f8a6cce37","Type":"ContainerStarted","Data":"915ddd5e36cb156220a1d0b907e4fc71ae4f39cde81572096ee79bee0d922e9c"} Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.119063 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" event={"ID":"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0","Type":"ContainerStarted","Data":"60ba3279eb3a9ca09f7c929f727814921a6570dbe53e2ff6c2621c78bf414567"} Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.119117 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" event={"ID":"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0","Type":"ContainerStarted","Data":"85ac36a1471a71c665f7dc8c1139c07a9fbca9df659ea7249168264eae3aba50"} Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.119297 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.122423 4711 generic.go:334] "Generic (PLEG): container finished" podID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerID="1e48c1a8f5447e662df44305da2641363e22ae249841622e164f2a72b15586a5" exitCode=0 Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.125644 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwf5" event={"ID":"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb","Type":"ContainerDied","Data":"1e48c1a8f5447e662df44305da2641363e22ae249841622e164f2a72b15586a5"} Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.188096 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" podStartSLOduration=138.188043412 podStartE2EDuration="2m18.188043412s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:09.183517189 +0000 UTC m=+158.892883636" watchObservedRunningTime="2025-12-02 10:16:09.188043412 +0000 UTC m=+158.897409859" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.368603 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brtmx"] Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.387081 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7fgpt"] Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.388111 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.390294 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.399015 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fgpt"] Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.492222 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blq7g\" (UniqueName: \"kubernetes.io/projected/a3c13742-8972-4320-8b08-ced2c55156d3-kube-api-access-blq7g\") pod \"redhat-operators-7fgpt\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.492275 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-utilities\") pod \"redhat-operators-7fgpt\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.492298 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-catalog-content\") pod \"redhat-operators-7fgpt\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.537053 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.593560 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blq7g\" (UniqueName: \"kubernetes.io/projected/a3c13742-8972-4320-8b08-ced2c55156d3-kube-api-access-blq7g\") pod \"redhat-operators-7fgpt\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.593647 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-utilities\") pod \"redhat-operators-7fgpt\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.593684 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-catalog-content\") pod \"redhat-operators-7fgpt\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.595374 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-catalog-content\") pod \"redhat-operators-7fgpt\" (UID: 
\"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.595391 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-utilities\") pod \"redhat-operators-7fgpt\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.616724 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blq7g\" (UniqueName: \"kubernetes.io/projected/a3c13742-8972-4320-8b08-ced2c55156d3-kube-api-access-blq7g\") pod \"redhat-operators-7fgpt\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") " pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.694743 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64a42c54-5124-490a-8f2c-5b39d57053a5-kubelet-dir\") pod \"64a42c54-5124-490a-8f2c-5b39d57053a5\" (UID: \"64a42c54-5124-490a-8f2c-5b39d57053a5\") " Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.694870 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64a42c54-5124-490a-8f2c-5b39d57053a5-kube-api-access\") pod \"64a42c54-5124-490a-8f2c-5b39d57053a5\" (UID: \"64a42c54-5124-490a-8f2c-5b39d57053a5\") " Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.695627 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64a42c54-5124-490a-8f2c-5b39d57053a5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "64a42c54-5124-490a-8f2c-5b39d57053a5" (UID: "64a42c54-5124-490a-8f2c-5b39d57053a5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.698184 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a42c54-5124-490a-8f2c-5b39d57053a5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "64a42c54-5124-490a-8f2c-5b39d57053a5" (UID: "64a42c54-5124-490a-8f2c-5b39d57053a5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.789680 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dghgm"] Dec 02 10:16:09 crc kubenswrapper[4711]: E1202 10:16:09.790027 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a42c54-5124-490a-8f2c-5b39d57053a5" containerName="pruner" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.790047 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a42c54-5124-490a-8f2c-5b39d57053a5" containerName="pruner" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.790214 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a42c54-5124-490a-8f2c-5b39d57053a5" containerName="pruner" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.791433 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.797214 4711 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64a42c54-5124-490a-8f2c-5b39d57053a5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.797245 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64a42c54-5124-490a-8f2c-5b39d57053a5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.811216 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dghgm"] Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.836033 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.898320 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-utilities\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.898393 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-catalog-content\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.898438 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5jl\" (UniqueName: 
\"kubernetes.io/projected/41802e85-ca3f-4296-85ce-84bc4f4169d0-kube-api-access-2r5jl\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.999351 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-utilities\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.999408 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-catalog-content\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:09 crc kubenswrapper[4711]: I1202 10:16:09.999463 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r5jl\" (UniqueName: \"kubernetes.io/projected/41802e85-ca3f-4296-85ce-84bc4f4169d0-kube-api-access-2r5jl\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.000340 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-utilities\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.000458 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-catalog-content\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.024418 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r5jl\" (UniqueName: \"kubernetes.io/projected/41802e85-ca3f-4296-85ce-84bc4f4169d0-kube-api-access-2r5jl\") pod \"redhat-operators-dghgm\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.075517 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fgpt"] Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.080404 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:10 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:10 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:10 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.080441 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:10 crc kubenswrapper[4711]: W1202 10:16:10.080903 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3c13742_8972_4320_8b08_ced2c55156d3.slice/crio-fe60fe066a5e5af5481a75b2be40efd7986c1d926696fe6007c399599f9d1c63 WatchSource:0}: Error finding container 
fe60fe066a5e5af5481a75b2be40efd7986c1d926696fe6007c399599f9d1c63: Status 404 returned error can't find the container with id fe60fe066a5e5af5481a75b2be40efd7986c1d926696fe6007c399599f9d1c63 Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.107863 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.223547 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fgpt" event={"ID":"a3c13742-8972-4320-8b08-ced2c55156d3","Type":"ContainerStarted","Data":"fe60fe066a5e5af5481a75b2be40efd7986c1d926696fe6007c399599f9d1c63"} Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.225474 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brtmx" event={"ID":"226fe786-af06-4294-a1e5-7e1c4aa86551","Type":"ContainerStarted","Data":"d16272ffd3910a63afbb40e68c704cf5c70fe646c23a0f8865a1a465ec985273"} Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.229349 4711 generic.go:334] "Generic (PLEG): container finished" podID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerID="59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a" exitCode=0 Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.229424 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grd8v" event={"ID":"0c729b20-2f40-43b6-8432-062f8a6cce37","Type":"ContainerDied","Data":"59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a"} Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.236310 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.236717 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"64a42c54-5124-490a-8f2c-5b39d57053a5","Type":"ContainerDied","Data":"f0be26251dd9bd3123c874c1a6febc7bb5dd218abfd3103aab9d69deb1ed8e2d"} Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.236772 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0be26251dd9bd3123c874c1a6febc7bb5dd218abfd3103aab9d69deb1ed8e2d" Dec 02 10:16:10 crc kubenswrapper[4711]: I1202 10:16:10.331393 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dghgm"] Dec 02 10:16:10 crc kubenswrapper[4711]: W1202 10:16:10.339851 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41802e85_ca3f_4296_85ce_84bc4f4169d0.slice/crio-158f6681fa7f774fb67849d77416b0bf260886ed1ed6534b4c7bdf424bc7dd74 WatchSource:0}: Error finding container 158f6681fa7f774fb67849d77416b0bf260886ed1ed6534b4c7bdf424bc7dd74: Status 404 returned error can't find the container with id 158f6681fa7f774fb67849d77416b0bf260886ed1ed6534b4c7bdf424bc7dd74 Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.081030 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:11 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:11 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:11 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.081437 4711 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.247091 4711 generic.go:334] "Generic (PLEG): container finished" podID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerID="2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0" exitCode=0 Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.247171 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brtmx" event={"ID":"226fe786-af06-4294-a1e5-7e1c4aa86551","Type":"ContainerDied","Data":"2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0"} Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.248771 4711 generic.go:334] "Generic (PLEG): container finished" podID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerID="c2fd1714ee0e78976931ca23279d24d6e489204760621f33ed543dd9a8d0657a" exitCode=0 Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.248843 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghgm" event={"ID":"41802e85-ca3f-4296-85ce-84bc4f4169d0","Type":"ContainerDied","Data":"c2fd1714ee0e78976931ca23279d24d6e489204760621f33ed543dd9a8d0657a"} Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.248879 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghgm" event={"ID":"41802e85-ca3f-4296-85ce-84bc4f4169d0","Type":"ContainerStarted","Data":"158f6681fa7f774fb67849d77416b0bf260886ed1ed6534b4c7bdf424bc7dd74"} Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.251019 4711 generic.go:334] "Generic (PLEG): container finished" podID="a3c13742-8972-4320-8b08-ced2c55156d3" containerID="e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07" exitCode=0 Dec 02 10:16:11 crc kubenswrapper[4711]: I1202 10:16:11.251058 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fgpt" event={"ID":"a3c13742-8972-4320-8b08-ced2c55156d3","Type":"ContainerDied","Data":"e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07"} Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.015505 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.016218 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.019068 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.019407 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.023634 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.080478 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:12 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:12 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:12 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.080578 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:12 crc 
kubenswrapper[4711]: I1202 10:16:12.140626 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.140725 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.191795 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.196754 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-2c7s8" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.242770 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.242847 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 
10:16:12.242931 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.296610 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.416592 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.546296 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qzmsv" Dec 02 10:16:12 crc kubenswrapper[4711]: I1202 10:16:12.767571 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 10:16:12 crc kubenswrapper[4711]: W1202 10:16:12.778836 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda0d01871_cd3f_4ebf_84b2_f302db3e0452.slice/crio-ff891027b637ca8223a924b4d04fc6b4cb7d0b99d006b4602fd1083bcab77ae8 WatchSource:0}: Error finding container ff891027b637ca8223a924b4d04fc6b4cb7d0b99d006b4602fd1083bcab77ae8: Status 404 returned error can't find the container with id ff891027b637ca8223a924b4d04fc6b4cb7d0b99d006b4602fd1083bcab77ae8 Dec 02 10:16:13 crc kubenswrapper[4711]: I1202 10:16:13.082540 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:13 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:13 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:13 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:13 crc kubenswrapper[4711]: I1202 10:16:13.083061 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:13 crc kubenswrapper[4711]: I1202 10:16:13.281154 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a0d01871-cd3f-4ebf-84b2-f302db3e0452","Type":"ContainerStarted","Data":"3ebf034e6b558f7108952c8b05a5320ff6c5cd99271eab2a5de60ddf5c8a39ad"} Dec 02 10:16:13 crc kubenswrapper[4711]: I1202 10:16:13.281199 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a0d01871-cd3f-4ebf-84b2-f302db3e0452","Type":"ContainerStarted","Data":"ff891027b637ca8223a924b4d04fc6b4cb7d0b99d006b4602fd1083bcab77ae8"} Dec 02 10:16:13 crc kubenswrapper[4711]: I1202 10:16:13.582116 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:16:13 crc kubenswrapper[4711]: I1202 10:16:13.589870 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87347875-9865-4380-a0ea-3fde5596dce7-metrics-certs\") pod \"network-metrics-daemon-c82q2\" (UID: \"87347875-9865-4380-a0ea-3fde5596dce7\") " 
pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:16:13 crc kubenswrapper[4711]: I1202 10:16:13.598630 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c82q2" Dec 02 10:16:14 crc kubenswrapper[4711]: I1202 10:16:14.080019 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:14 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:14 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:14 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:14 crc kubenswrapper[4711]: I1202 10:16:14.080278 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:14 crc kubenswrapper[4711]: I1202 10:16:14.080016 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c82q2"] Dec 02 10:16:14 crc kubenswrapper[4711]: W1202 10:16:14.087337 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87347875_9865_4380_a0ea_3fde5596dce7.slice/crio-8c4103e458d77d4ccf1747d9dc0d501329856021f64e05960831a16844e46596 WatchSource:0}: Error finding container 8c4103e458d77d4ccf1747d9dc0d501329856021f64e05960831a16844e46596: Status 404 returned error can't find the container with id 8c4103e458d77d4ccf1747d9dc0d501329856021f64e05960831a16844e46596 Dec 02 10:16:14 crc kubenswrapper[4711]: I1202 10:16:14.289911 4711 generic.go:334] "Generic (PLEG): container finished" podID="a0d01871-cd3f-4ebf-84b2-f302db3e0452" 
containerID="3ebf034e6b558f7108952c8b05a5320ff6c5cd99271eab2a5de60ddf5c8a39ad" exitCode=0 Dec 02 10:16:14 crc kubenswrapper[4711]: I1202 10:16:14.290023 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a0d01871-cd3f-4ebf-84b2-f302db3e0452","Type":"ContainerDied","Data":"3ebf034e6b558f7108952c8b05a5320ff6c5cd99271eab2a5de60ddf5c8a39ad"} Dec 02 10:16:14 crc kubenswrapper[4711]: I1202 10:16:14.291842 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c82q2" event={"ID":"87347875-9865-4380-a0ea-3fde5596dce7","Type":"ContainerStarted","Data":"8c4103e458d77d4ccf1747d9dc0d501329856021f64e05960831a16844e46596"} Dec 02 10:16:15 crc kubenswrapper[4711]: I1202 10:16:15.080506 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:15 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:15 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:15 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:15 crc kubenswrapper[4711]: I1202 10:16:15.080551 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:15 crc kubenswrapper[4711]: I1202 10:16:15.318118 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c82q2" event={"ID":"87347875-9865-4380-a0ea-3fde5596dce7","Type":"ContainerStarted","Data":"87984a9d96d51b81822d44c4764408c13cf8d95c8e169e7e1c62a10ff7c40e89"} Dec 02 10:16:15 crc kubenswrapper[4711]: I1202 10:16:15.318484 4711 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/network-metrics-daemon-c82q2" event={"ID":"87347875-9865-4380-a0ea-3fde5596dce7","Type":"ContainerStarted","Data":"9d5d1a6c661a37fc5d7907aa28c68341f4c1217460142de8b6cdb05ee23688b7"} Dec 02 10:16:16 crc kubenswrapper[4711]: I1202 10:16:16.079577 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:16 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:16 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:16 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:16 crc kubenswrapper[4711]: I1202 10:16:16.079639 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:16 crc kubenswrapper[4711]: I1202 10:16:16.346330 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-c82q2" podStartSLOduration=145.346300784 podStartE2EDuration="2m25.346300784s" podCreationTimestamp="2025-12-02 10:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:16.34582906 +0000 UTC m=+166.055195537" watchObservedRunningTime="2025-12-02 10:16:16.346300784 +0000 UTC m=+166.055667231" Dec 02 10:16:17 crc kubenswrapper[4711]: I1202 10:16:17.014676 4711 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m2lb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 10:16:17 crc kubenswrapper[4711]: I1202 
10:16:17.014735 4711 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m2lb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 10:16:17 crc kubenswrapper[4711]: I1202 10:16:17.014758 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m2lb" podUID="585fe769-d9ad-42f1-8cb6-29904018f637" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 02 10:16:17 crc kubenswrapper[4711]: I1202 10:16:17.014799 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4m2lb" podUID="585fe769-d9ad-42f1-8cb6-29904018f637" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 02 10:16:17 crc kubenswrapper[4711]: I1202 10:16:17.038089 4711 patch_prober.go:28] interesting pod/console-f9d7485db-g2lxx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Dec 02 10:16:17 crc kubenswrapper[4711]: I1202 10:16:17.038154 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2lxx" podUID="8c1f70ef-1183-4621-bb91-ffe2d31fa391" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Dec 02 10:16:17 crc kubenswrapper[4711]: I1202 10:16:17.080033 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Dec 02 10:16:17 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:17 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:17 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:17 crc kubenswrapper[4711]: I1202 10:16:17.080104 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:18 crc kubenswrapper[4711]: I1202 10:16:18.080184 4711 patch_prober.go:28] interesting pod/router-default-5444994796-zrj4j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:16:18 crc kubenswrapper[4711]: [-]has-synced failed: reason withheld Dec 02 10:16:18 crc kubenswrapper[4711]: [+]process-running ok Dec 02 10:16:18 crc kubenswrapper[4711]: healthz check failed Dec 02 10:16:18 crc kubenswrapper[4711]: I1202 10:16:18.080275 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrj4j" podUID="34333510-8dc2-45c2-9c08-013bdb2bcd85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:16:19 crc kubenswrapper[4711]: I1202 10:16:19.092617 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:16:19 crc kubenswrapper[4711]: I1202 10:16:19.095202 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zrj4j" Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.552717 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.587474 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.587552 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.607374 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kube-api-access\") pod \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\" (UID: \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\") " Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.607647 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kubelet-dir\") pod \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\" (UID: \"a0d01871-cd3f-4ebf-84b2-f302db3e0452\") " Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.607754 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a0d01871-cd3f-4ebf-84b2-f302db3e0452" (UID: "a0d01871-cd3f-4ebf-84b2-f302db3e0452"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.608029 4711 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.631910 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a0d01871-cd3f-4ebf-84b2-f302db3e0452" (UID: "a0d01871-cd3f-4ebf-84b2-f302db3e0452"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:16:22 crc kubenswrapper[4711]: I1202 10:16:22.709329 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d01871-cd3f-4ebf-84b2-f302db3e0452-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:23 crc kubenswrapper[4711]: I1202 10:16:23.446892 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a0d01871-cd3f-4ebf-84b2-f302db3e0452","Type":"ContainerDied","Data":"ff891027b637ca8223a924b4d04fc6b4cb7d0b99d006b4602fd1083bcab77ae8"} Dec 02 10:16:23 crc kubenswrapper[4711]: I1202 10:16:23.447108 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff891027b637ca8223a924b4d04fc6b4cb7d0b99d006b4602fd1083bcab77ae8" Dec 02 10:16:23 crc kubenswrapper[4711]: I1202 10:16:23.447162 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:16:27 crc kubenswrapper[4711]: I1202 10:16:27.026670 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4m2lb" Dec 02 10:16:27 crc kubenswrapper[4711]: I1202 10:16:27.043934 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:16:27 crc kubenswrapper[4711]: I1202 10:16:27.058372 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:16:27 crc kubenswrapper[4711]: I1202 10:16:27.858381 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:16:37 crc kubenswrapper[4711]: I1202 10:16:37.543012 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jcqjc" Dec 02 10:16:39 crc kubenswrapper[4711]: I1202 10:16:39.413411 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:16:44 crc kubenswrapper[4711]: E1202 10:16:44.059726 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 02 10:16:44 crc kubenswrapper[4711]: E1202 10:16:44.060500 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r5jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dghgm_openshift-marketplace(41802e85-ca3f-4296-85ce-84bc4f4169d0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:16:44 crc kubenswrapper[4711]: E1202 10:16:44.061767 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dghgm" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" Dec 02 10:16:44 crc 
kubenswrapper[4711]: I1202 10:16:44.815300 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 10:16:44 crc kubenswrapper[4711]: E1202 10:16:44.815653 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d01871-cd3f-4ebf-84b2-f302db3e0452" containerName="pruner" Dec 02 10:16:44 crc kubenswrapper[4711]: I1202 10:16:44.815680 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d01871-cd3f-4ebf-84b2-f302db3e0452" containerName="pruner" Dec 02 10:16:44 crc kubenswrapper[4711]: I1202 10:16:44.815863 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d01871-cd3f-4ebf-84b2-f302db3e0452" containerName="pruner" Dec 02 10:16:44 crc kubenswrapper[4711]: I1202 10:16:44.816510 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:44 crc kubenswrapper[4711]: I1202 10:16:44.820227 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 10:16:44 crc kubenswrapper[4711]: I1202 10:16:44.820460 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 10:16:44 crc kubenswrapper[4711]: I1202 10:16:44.828106 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 10:16:44 crc kubenswrapper[4711]: I1202 10:16:44.987999 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c27c88-5a07-42e6-884a-de1988462f6e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23c27c88-5a07-42e6-884a-de1988462f6e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:44 crc kubenswrapper[4711]: I1202 10:16:44.988100 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c27c88-5a07-42e6-884a-de1988462f6e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23c27c88-5a07-42e6-884a-de1988462f6e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:45 crc kubenswrapper[4711]: I1202 10:16:45.089060 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c27c88-5a07-42e6-884a-de1988462f6e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23c27c88-5a07-42e6-884a-de1988462f6e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:45 crc kubenswrapper[4711]: I1202 10:16:45.089332 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c27c88-5a07-42e6-884a-de1988462f6e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23c27c88-5a07-42e6-884a-de1988462f6e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:45 crc kubenswrapper[4711]: I1202 10:16:45.089475 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c27c88-5a07-42e6-884a-de1988462f6e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23c27c88-5a07-42e6-884a-de1988462f6e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:45 crc kubenswrapper[4711]: I1202 10:16:45.111113 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c27c88-5a07-42e6-884a-de1988462f6e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23c27c88-5a07-42e6-884a-de1988462f6e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:45 crc kubenswrapper[4711]: I1202 10:16:45.145028 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:45 crc kubenswrapper[4711]: E1202 10:16:45.455573 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dghgm" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" Dec 02 10:16:45 crc kubenswrapper[4711]: E1202 10:16:45.704576 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 02 10:16:45 crc kubenswrapper[4711]: E1202 10:16:45.704836 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs8h8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bzwf5_openshift-marketplace(fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:16:45 crc kubenswrapper[4711]: E1202 10:16:45.706126 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bzwf5" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" Dec 02 10:16:47 crc 
kubenswrapper[4711]: E1202 10:16:47.121901 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bzwf5" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" Dec 02 10:16:47 crc kubenswrapper[4711]: E1202 10:16:47.215566 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 10:16:47 crc kubenswrapper[4711]: E1202 10:16:47.215743 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-728rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tb9pv_openshift-marketplace(ef385405-9334-4ba2-a7ed-abd9d51cbd5d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:16:47 crc kubenswrapper[4711]: E1202 10:16:47.217322 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tb9pv" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" Dec 02 10:16:48 crc 
kubenswrapper[4711]: E1202 10:16:48.145756 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tb9pv" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.211248 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.211463 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frsxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-grd8v_openshift-marketplace(0c729b20-2f40-43b6-8432-062f8a6cce37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.212836 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-grd8v" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" Dec 02 10:16:48 crc 
kubenswrapper[4711]: E1202 10:16:48.255265 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.255760 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfn9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-7l5th_openshift-marketplace(d37a3481-62b0-42fd-b6c9-198f0e5aac93): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.257587 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7l5th" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.291061 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.291216 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blq7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7fgpt_openshift-marketplace(a3c13742-8972-4320-8b08-ced2c55156d3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.292926 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7fgpt" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" Dec 02 10:16:48 crc 
kubenswrapper[4711]: E1202 10:16:48.313228 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.313389 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4swh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-vs5vb_openshift-marketplace(38739913-4803-444c-b624-013235f6eec3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.314705 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vs5vb" podUID="38739913-4803-444c-b624-013235f6eec3" Dec 02 10:16:48 crc kubenswrapper[4711]: I1202 10:16:48.561787 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 10:16:48 crc kubenswrapper[4711]: I1202 10:16:48.572757 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brtmx" event={"ID":"226fe786-af06-4294-a1e5-7e1c4aa86551","Type":"ContainerStarted","Data":"fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b"} Dec 02 10:16:48 crc kubenswrapper[4711]: W1202 10:16:48.573872 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod23c27c88_5a07_42e6_884a_de1988462f6e.slice/crio-60a42e757c0c375a6b89092c9d58192e2ff184c295ab486e5666af755f72aa6d WatchSource:0}: Error finding container 60a42e757c0c375a6b89092c9d58192e2ff184c295ab486e5666af755f72aa6d: Status 404 returned error can't find the container with id 60a42e757c0c375a6b89092c9d58192e2ff184c295ab486e5666af755f72aa6d Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.574449 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vs5vb" 
podUID="38739913-4803-444c-b624-013235f6eec3" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.574563 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7fgpt" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.574570 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7l5th" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" Dec 02 10:16:48 crc kubenswrapper[4711]: E1202 10:16:48.575710 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-grd8v" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" Dec 02 10:16:49 crc kubenswrapper[4711]: I1202 10:16:49.580147 4711 generic.go:334] "Generic (PLEG): container finished" podID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerID="fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b" exitCode=0 Dec 02 10:16:49 crc kubenswrapper[4711]: I1202 10:16:49.580231 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brtmx" event={"ID":"226fe786-af06-4294-a1e5-7e1c4aa86551","Type":"ContainerDied","Data":"fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b"} Dec 02 10:16:49 crc kubenswrapper[4711]: I1202 10:16:49.582113 4711 generic.go:334] "Generic (PLEG): container finished" podID="23c27c88-5a07-42e6-884a-de1988462f6e" 
containerID="e7058b9cd100267266c5ca8d44f2f548deb312abc6a7368f207755efdcfef5ad" exitCode=0 Dec 02 10:16:49 crc kubenswrapper[4711]: I1202 10:16:49.582148 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"23c27c88-5a07-42e6-884a-de1988462f6e","Type":"ContainerDied","Data":"e7058b9cd100267266c5ca8d44f2f548deb312abc6a7368f207755efdcfef5ad"} Dec 02 10:16:49 crc kubenswrapper[4711]: I1202 10:16:49.582170 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"23c27c88-5a07-42e6-884a-de1988462f6e","Type":"ContainerStarted","Data":"60a42e757c0c375a6b89092c9d58192e2ff184c295ab486e5666af755f72aa6d"} Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.412578 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.428410 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.435864 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.558902 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kubelet-dir\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.558983 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-var-lock\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.559143 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kube-api-access\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.590175 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brtmx" event={"ID":"226fe786-af06-4294-a1e5-7e1c4aa86551","Type":"ContainerStarted","Data":"9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11"} Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.609936 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-brtmx" podStartSLOduration=3.822678162 podStartE2EDuration="42.609880777s" 
podCreationTimestamp="2025-12-02 10:16:08 +0000 UTC" firstStartedPulling="2025-12-02 10:16:11.249827059 +0000 UTC m=+160.959193516" lastFinishedPulling="2025-12-02 10:16:50.037029684 +0000 UTC m=+199.746396131" observedRunningTime="2025-12-02 10:16:50.605746183 +0000 UTC m=+200.315112650" watchObservedRunningTime="2025-12-02 10:16:50.609880777 +0000 UTC m=+200.319247234" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.660718 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kube-api-access\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.660763 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kubelet-dir\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.660788 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-var-lock\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.660855 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-var-lock\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.661249 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kubelet-dir\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.685919 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kube-api-access\") pod \"installer-9-crc\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.750131 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:16:50 crc kubenswrapper[4711]: I1202 10:16:50.912872 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.071229 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c27c88-5a07-42e6-884a-de1988462f6e-kube-api-access\") pod \"23c27c88-5a07-42e6-884a-de1988462f6e\" (UID: \"23c27c88-5a07-42e6-884a-de1988462f6e\") " Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.071494 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c27c88-5a07-42e6-884a-de1988462f6e-kubelet-dir\") pod \"23c27c88-5a07-42e6-884a-de1988462f6e\" (UID: \"23c27c88-5a07-42e6-884a-de1988462f6e\") " Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.071997 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c27c88-5a07-42e6-884a-de1988462f6e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23c27c88-5a07-42e6-884a-de1988462f6e" (UID: 
"23c27c88-5a07-42e6-884a-de1988462f6e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.086079 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c27c88-5a07-42e6-884a-de1988462f6e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23c27c88-5a07-42e6-884a-de1988462f6e" (UID: "23c27c88-5a07-42e6-884a-de1988462f6e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.166156 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.173745 4711 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23c27c88-5a07-42e6-884a-de1988462f6e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.173783 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c27c88-5a07-42e6-884a-de1988462f6e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:16:51 crc kubenswrapper[4711]: W1202 10:16:51.178914 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod790393ef_9c83_4c71_a3cb_07d35a1e5f55.slice/crio-efe6fd2933cd87d4d7010dd7f0395dea072718900ca1baeac2eadb396a3a7a4f WatchSource:0}: Error finding container efe6fd2933cd87d4d7010dd7f0395dea072718900ca1baeac2eadb396a3a7a4f: Status 404 returned error can't find the container with id efe6fd2933cd87d4d7010dd7f0395dea072718900ca1baeac2eadb396a3a7a4f Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.597401 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"23c27c88-5a07-42e6-884a-de1988462f6e","Type":"ContainerDied","Data":"60a42e757c0c375a6b89092c9d58192e2ff184c295ab486e5666af755f72aa6d"} Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.597427 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.597488 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a42e757c0c375a6b89092c9d58192e2ff184c295ab486e5666af755f72aa6d" Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.600503 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"790393ef-9c83-4c71-a3cb-07d35a1e5f55","Type":"ContainerStarted","Data":"1d677cc14ae50a1afcfca157402137dd5386f54bf5bfc9dac5f8fdaf81e802b9"} Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.600612 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"790393ef-9c83-4c71-a3cb-07d35a1e5f55","Type":"ContainerStarted","Data":"efe6fd2933cd87d4d7010dd7f0395dea072718900ca1baeac2eadb396a3a7a4f"} Dec 02 10:16:51 crc kubenswrapper[4711]: I1202 10:16:51.624907 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.624889121 podStartE2EDuration="1.624889121s" podCreationTimestamp="2025-12-02 10:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:16:51.620792751 +0000 UTC m=+201.330159198" watchObservedRunningTime="2025-12-02 10:16:51.624889121 +0000 UTC m=+201.334255568" Dec 02 10:16:52 crc kubenswrapper[4711]: I1202 10:16:52.586430 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:16:52 crc kubenswrapper[4711]: I1202 10:16:52.587031 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:16:58 crc kubenswrapper[4711]: I1202 10:16:58.946806 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:58 crc kubenswrapper[4711]: I1202 10:16:58.947543 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:59 crc kubenswrapper[4711]: I1202 10:16:59.025561 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:16:59 crc kubenswrapper[4711]: I1202 10:16:59.687983 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:17:00 crc kubenswrapper[4711]: I1202 10:17:00.653292 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fgpt" event={"ID":"a3c13742-8972-4320-8b08-ced2c55156d3","Type":"ContainerStarted","Data":"703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98"} Dec 02 10:17:00 crc kubenswrapper[4711]: I1202 10:17:00.655929 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwf5" event={"ID":"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb","Type":"ContainerStarted","Data":"aa00950bfe44c2caf2780563df003e784560612550204d741f3577f865dcd38b"} Dec 02 10:17:01 crc kubenswrapper[4711]: I1202 10:17:01.665526 4711 
generic.go:334] "Generic (PLEG): container finished" podID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerID="e886cefe8d4d019ba6097cf37c3e8c940687a1bf408add8a578c8dbd692e3958" exitCode=0 Dec 02 10:17:01 crc kubenswrapper[4711]: I1202 10:17:01.665581 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghgm" event={"ID":"41802e85-ca3f-4296-85ce-84bc4f4169d0","Type":"ContainerDied","Data":"e886cefe8d4d019ba6097cf37c3e8c940687a1bf408add8a578c8dbd692e3958"} Dec 02 10:17:01 crc kubenswrapper[4711]: I1202 10:17:01.669294 4711 generic.go:334] "Generic (PLEG): container finished" podID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerID="aa00950bfe44c2caf2780563df003e784560612550204d741f3577f865dcd38b" exitCode=0 Dec 02 10:17:01 crc kubenswrapper[4711]: I1202 10:17:01.669367 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwf5" event={"ID":"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb","Type":"ContainerDied","Data":"aa00950bfe44c2caf2780563df003e784560612550204d741f3577f865dcd38b"} Dec 02 10:17:01 crc kubenswrapper[4711]: I1202 10:17:01.674614 4711 generic.go:334] "Generic (PLEG): container finished" podID="a3c13742-8972-4320-8b08-ced2c55156d3" containerID="703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98" exitCode=0 Dec 02 10:17:01 crc kubenswrapper[4711]: I1202 10:17:01.674735 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fgpt" event={"ID":"a3c13742-8972-4320-8b08-ced2c55156d3","Type":"ContainerDied","Data":"703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98"} Dec 02 10:17:01 crc kubenswrapper[4711]: I1202 10:17:01.684109 4711 generic.go:334] "Generic (PLEG): container finished" podID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerID="6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1" exitCode=0 Dec 02 10:17:01 crc kubenswrapper[4711]: I1202 10:17:01.684167 4711 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb9pv" event={"ID":"ef385405-9334-4ba2-a7ed-abd9d51cbd5d","Type":"ContainerDied","Data":"6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1"} Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.059977 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brtmx"] Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.060323 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-brtmx" podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerName="registry-server" containerID="cri-o://9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11" gracePeriod=2 Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.464377 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.640705 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-catalog-content\") pod \"226fe786-af06-4294-a1e5-7e1c4aa86551\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.641130 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v6mx\" (UniqueName: \"kubernetes.io/projected/226fe786-af06-4294-a1e5-7e1c4aa86551-kube-api-access-6v6mx\") pod \"226fe786-af06-4294-a1e5-7e1c4aa86551\" (UID: \"226fe786-af06-4294-a1e5-7e1c4aa86551\") " Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.641184 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-utilities\") pod \"226fe786-af06-4294-a1e5-7e1c4aa86551\" (UID: 
\"226fe786-af06-4294-a1e5-7e1c4aa86551\") " Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.642007 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-utilities" (OuterVolumeSpecName: "utilities") pod "226fe786-af06-4294-a1e5-7e1c4aa86551" (UID: "226fe786-af06-4294-a1e5-7e1c4aa86551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.646182 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226fe786-af06-4294-a1e5-7e1c4aa86551-kube-api-access-6v6mx" (OuterVolumeSpecName: "kube-api-access-6v6mx") pod "226fe786-af06-4294-a1e5-7e1c4aa86551" (UID: "226fe786-af06-4294-a1e5-7e1c4aa86551"). InnerVolumeSpecName "kube-api-access-6v6mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.663001 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "226fe786-af06-4294-a1e5-7e1c4aa86551" (UID: "226fe786-af06-4294-a1e5-7e1c4aa86551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.690990 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghgm" event={"ID":"41802e85-ca3f-4296-85ce-84bc4f4169d0","Type":"ContainerStarted","Data":"b11901c06214cb48c7689562ec098322ebd8f70efcfc731596f6beade240ad4b"} Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.692823 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwf5" event={"ID":"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb","Type":"ContainerStarted","Data":"6bb831207ce722686e67f328f4161321da80653864d97d1c02e285fa196310d4"} Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.695147 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fgpt" event={"ID":"a3c13742-8972-4320-8b08-ced2c55156d3","Type":"ContainerStarted","Data":"d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f"} Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.697167 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb9pv" event={"ID":"ef385405-9334-4ba2-a7ed-abd9d51cbd5d","Type":"ContainerStarted","Data":"cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539"} Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.699417 4711 generic.go:334] "Generic (PLEG): container finished" podID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerID="9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11" exitCode=0 Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.699459 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brtmx" event={"ID":"226fe786-af06-4294-a1e5-7e1c4aa86551","Type":"ContainerDied","Data":"9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11"} Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.699480 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brtmx" event={"ID":"226fe786-af06-4294-a1e5-7e1c4aa86551","Type":"ContainerDied","Data":"d16272ffd3910a63afbb40e68c704cf5c70fe646c23a0f8865a1a465ec985273"} Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.699489 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brtmx" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.699668 4711 scope.go:117] "RemoveContainer" containerID="9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.712531 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dghgm" podStartSLOduration=2.6435068299999998 podStartE2EDuration="53.712475558s" podCreationTimestamp="2025-12-02 10:16:09 +0000 UTC" firstStartedPulling="2025-12-02 10:16:11.251011351 +0000 UTC m=+160.960377798" lastFinishedPulling="2025-12-02 10:17:02.319980079 +0000 UTC m=+212.029346526" observedRunningTime="2025-12-02 10:17:02.709242271 +0000 UTC m=+212.418608718" watchObservedRunningTime="2025-12-02 10:17:02.712475558 +0000 UTC m=+212.421842005" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.714384 4711 scope.go:117] "RemoveContainer" containerID="fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.729736 4711 scope.go:117] "RemoveContainer" containerID="2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.740249 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7fgpt" podStartSLOduration=2.750068536 podStartE2EDuration="53.740223164s" podCreationTimestamp="2025-12-02 10:16:09 +0000 UTC" firstStartedPulling="2025-12-02 10:16:11.255268307 +0000 UTC 
m=+160.964634764" lastFinishedPulling="2025-12-02 10:17:02.245422945 +0000 UTC m=+211.954789392" observedRunningTime="2025-12-02 10:17:02.735969049 +0000 UTC m=+212.445335496" watchObservedRunningTime="2025-12-02 10:17:02.740223164 +0000 UTC m=+212.449589611" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.742606 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v6mx\" (UniqueName: \"kubernetes.io/projected/226fe786-af06-4294-a1e5-7e1c4aa86551-kube-api-access-6v6mx\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.742639 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.742658 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226fe786-af06-4294-a1e5-7e1c4aa86551-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.753751 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tb9pv" podStartSLOduration=3.718035892 podStartE2EDuration="56.753729477s" podCreationTimestamp="2025-12-02 10:16:06 +0000 UTC" firstStartedPulling="2025-12-02 10:16:09.075001209 +0000 UTC m=+158.784367656" lastFinishedPulling="2025-12-02 10:17:02.110694794 +0000 UTC m=+211.820061241" observedRunningTime="2025-12-02 10:17:02.75124154 +0000 UTC m=+212.460607987" watchObservedRunningTime="2025-12-02 10:17:02.753729477 +0000 UTC m=+212.463095924" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.760083 4711 scope.go:117] "RemoveContainer" containerID="9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11" Dec 02 10:17:02 crc kubenswrapper[4711]: E1202 10:17:02.766139 4711 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11\": container with ID starting with 9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11 not found: ID does not exist" containerID="9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.766194 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11"} err="failed to get container status \"9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11\": rpc error: code = NotFound desc = could not find container \"9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11\": container with ID starting with 9585392eb5522814cdaab3f9715ad4a7383eb4e0c5b3ecac8c64628b54d11b11 not found: ID does not exist" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.766247 4711 scope.go:117] "RemoveContainer" containerID="fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b" Dec 02 10:17:02 crc kubenswrapper[4711]: E1202 10:17:02.766710 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b\": container with ID starting with fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b not found: ID does not exist" containerID="fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.766765 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b"} err="failed to get container status \"fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b\": rpc error: code = NotFound desc = could not find container 
\"fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b\": container with ID starting with fd379ff3b570d9f8f17e65ff1c4d20b24d67a72896908075aea60453e477413b not found: ID does not exist" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.766788 4711 scope.go:117] "RemoveContainer" containerID="2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0" Dec 02 10:17:02 crc kubenswrapper[4711]: E1202 10:17:02.767019 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0\": container with ID starting with 2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0 not found: ID does not exist" containerID="2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.767053 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0"} err="failed to get container status \"2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0\": rpc error: code = NotFound desc = could not find container \"2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0\": container with ID starting with 2d4c9edee327c65db2186d751967137eae2faf9998ca8a08a1b3aa920bd511d0 not found: ID does not exist" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.789494 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bzwf5" podStartSLOduration=3.77665238 podStartE2EDuration="56.789475048s" podCreationTimestamp="2025-12-02 10:16:06 +0000 UTC" firstStartedPulling="2025-12-02 10:16:09.130424991 +0000 UTC m=+158.839791438" lastFinishedPulling="2025-12-02 10:17:02.143247669 +0000 UTC m=+211.852614106" observedRunningTime="2025-12-02 10:17:02.77917554 +0000 UTC m=+212.488541987" 
watchObservedRunningTime="2025-12-02 10:17:02.789475048 +0000 UTC m=+212.498841495" Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.791337 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brtmx"] Dec 02 10:17:02 crc kubenswrapper[4711]: I1202 10:17:02.794194 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-brtmx"] Dec 02 10:17:03 crc kubenswrapper[4711]: I1202 10:17:03.110926 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" path="/var/lib/kubelet/pods/226fe786-af06-4294-a1e5-7e1c4aa86551/volumes" Dec 02 10:17:03 crc kubenswrapper[4711]: I1202 10:17:03.706578 4711 generic.go:334] "Generic (PLEG): container finished" podID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerID="20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc" exitCode=0 Dec 02 10:17:03 crc kubenswrapper[4711]: I1202 10:17:03.706666 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grd8v" event={"ID":"0c729b20-2f40-43b6-8432-062f8a6cce37","Type":"ContainerDied","Data":"20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc"} Dec 02 10:17:04 crc kubenswrapper[4711]: I1202 10:17:04.722516 4711 generic.go:334] "Generic (PLEG): container finished" podID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerID="5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca" exitCode=0 Dec 02 10:17:04 crc kubenswrapper[4711]: I1202 10:17:04.722630 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7l5th" event={"ID":"d37a3481-62b0-42fd-b6c9-198f0e5aac93","Type":"ContainerDied","Data":"5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca"} Dec 02 10:17:04 crc kubenswrapper[4711]: I1202 10:17:04.727632 4711 generic.go:334] "Generic (PLEG): container finished" podID="38739913-4803-444c-b624-013235f6eec3" 
containerID="11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47" exitCode=0 Dec 02 10:17:04 crc kubenswrapper[4711]: I1202 10:17:04.727658 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs5vb" event={"ID":"38739913-4803-444c-b624-013235f6eec3","Type":"ContainerDied","Data":"11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47"} Dec 02 10:17:04 crc kubenswrapper[4711]: I1202 10:17:04.735706 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grd8v" event={"ID":"0c729b20-2f40-43b6-8432-062f8a6cce37","Type":"ContainerStarted","Data":"19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7"} Dec 02 10:17:04 crc kubenswrapper[4711]: I1202 10:17:04.761283 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-grd8v" podStartSLOduration=2.576571694 podStartE2EDuration="56.761261533s" podCreationTimestamp="2025-12-02 10:16:08 +0000 UTC" firstStartedPulling="2025-12-02 10:16:10.232599148 +0000 UTC m=+159.941965595" lastFinishedPulling="2025-12-02 10:17:04.417288987 +0000 UTC m=+214.126655434" observedRunningTime="2025-12-02 10:17:04.759465335 +0000 UTC m=+214.468831802" watchObservedRunningTime="2025-12-02 10:17:04.761261533 +0000 UTC m=+214.470627980" Dec 02 10:17:05 crc kubenswrapper[4711]: I1202 10:17:05.742308 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7l5th" event={"ID":"d37a3481-62b0-42fd-b6c9-198f0e5aac93","Type":"ContainerStarted","Data":"d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58"} Dec 02 10:17:05 crc kubenswrapper[4711]: I1202 10:17:05.744976 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs5vb" 
event={"ID":"38739913-4803-444c-b624-013235f6eec3","Type":"ContainerStarted","Data":"3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234"} Dec 02 10:17:05 crc kubenswrapper[4711]: I1202 10:17:05.761916 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7l5th" podStartSLOduration=3.622478149 podStartE2EDuration="59.761894327s" podCreationTimestamp="2025-12-02 10:16:06 +0000 UTC" firstStartedPulling="2025-12-02 10:16:09.088163378 +0000 UTC m=+158.797529845" lastFinishedPulling="2025-12-02 10:17:05.227579576 +0000 UTC m=+214.936946023" observedRunningTime="2025-12-02 10:17:05.758910937 +0000 UTC m=+215.468277384" watchObservedRunningTime="2025-12-02 10:17:05.761894327 +0000 UTC m=+215.471260774" Dec 02 10:17:05 crc kubenswrapper[4711]: I1202 10:17:05.789195 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vs5vb" podStartSLOduration=3.678804552 podStartE2EDuration="59.789174349s" podCreationTimestamp="2025-12-02 10:16:06 +0000 UTC" firstStartedPulling="2025-12-02 10:16:09.087383517 +0000 UTC m=+158.796749964" lastFinishedPulling="2025-12-02 10:17:05.197753314 +0000 UTC m=+214.907119761" observedRunningTime="2025-12-02 10:17:05.787652149 +0000 UTC m=+215.497018596" watchObservedRunningTime="2025-12-02 10:17:05.789174349 +0000 UTC m=+215.498540796" Dec 02 10:17:06 crc kubenswrapper[4711]: I1202 10:17:06.852868 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:17:06 crc kubenswrapper[4711]: I1202 10:17:06.852938 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:17:06 crc kubenswrapper[4711]: I1202 10:17:06.889676 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:17:06 
crc kubenswrapper[4711]: I1202 10:17:06.889760 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:17:06 crc kubenswrapper[4711]: I1202 10:17:06.899401 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.136244 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.137074 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.183664 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.250792 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.251163 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.289890 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.796493 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tb9pv" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.798034 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:17:07 crc kubenswrapper[4711]: I1202 10:17:07.931830 4711 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/community-operators-7l5th" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="registry-server" probeResult="failure" output=< Dec 02 10:17:07 crc kubenswrapper[4711]: timeout: failed to connect service ":50051" within 1s Dec 02 10:17:07 crc kubenswrapper[4711]: > Dec 02 10:17:08 crc kubenswrapper[4711]: I1202 10:17:08.537492 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:17:08 crc kubenswrapper[4711]: I1202 10:17:08.537557 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:17:08 crc kubenswrapper[4711]: I1202 10:17:08.572806 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:17:08 crc kubenswrapper[4711]: I1202 10:17:08.655586 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzwf5"] Dec 02 10:17:09 crc kubenswrapper[4711]: I1202 10:17:09.764974 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bzwf5" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerName="registry-server" containerID="cri-o://6bb831207ce722686e67f328f4161321da80653864d97d1c02e285fa196310d4" gracePeriod=2 Dec 02 10:17:09 crc kubenswrapper[4711]: I1202 10:17:09.837223 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:17:09 crc kubenswrapper[4711]: I1202 10:17:09.837573 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:17:09 crc kubenswrapper[4711]: I1202 10:17:09.899530 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 
10:17:10 crc kubenswrapper[4711]: I1202 10:17:10.108835 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:17:10 crc kubenswrapper[4711]: I1202 10:17:10.109244 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:17:10 crc kubenswrapper[4711]: I1202 10:17:10.151605 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:17:10 crc kubenswrapper[4711]: I1202 10:17:10.830703 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7fgpt" Dec 02 10:17:10 crc kubenswrapper[4711]: I1202 10:17:10.831328 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.784814 4711 generic.go:334] "Generic (PLEG): container finished" podID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerID="6bb831207ce722686e67f328f4161321da80653864d97d1c02e285fa196310d4" exitCode=0 Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.785025 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwf5" event={"ID":"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb","Type":"ContainerDied","Data":"6bb831207ce722686e67f328f4161321da80653864d97d1c02e285fa196310d4"} Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.785458 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwf5" event={"ID":"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb","Type":"ContainerDied","Data":"1300ce6053f2760af8f660e0ddc04728eca159f8c1866e27e7a5b50e66a9c2a0"} Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.785476 4711 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1300ce6053f2760af8f660e0ddc04728eca159f8c1866e27e7a5b50e66a9c2a0" Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.797204 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.979858 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-utilities\") pod \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.979915 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-catalog-content\") pod \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.980013 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs8h8\" (UniqueName: \"kubernetes.io/projected/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-kube-api-access-gs8h8\") pod \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\" (UID: \"fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb\") " Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.980663 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-utilities" (OuterVolumeSpecName: "utilities") pod "fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" (UID: "fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:17:12 crc kubenswrapper[4711]: I1202 10:17:12.988564 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-kube-api-access-gs8h8" (OuterVolumeSpecName: "kube-api-access-gs8h8") pod "fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" (UID: "fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb"). InnerVolumeSpecName "kube-api-access-gs8h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 10:17:13.056439 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" (UID: "fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 10:17:13.058028 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dghgm"] Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 10:17:13.083778 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs8h8\" (UniqueName: \"kubernetes.io/projected/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-kube-api-access-gs8h8\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 10:17:13.083834 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 10:17:13.083864 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 
10:17:13.792219 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dghgm" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerName="registry-server" containerID="cri-o://b11901c06214cb48c7689562ec098322ebd8f70efcfc731596f6beade240ad4b" gracePeriod=2 Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 10:17:13.793051 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzwf5" Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 10:17:13.827208 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzwf5"] Dec 02 10:17:13 crc kubenswrapper[4711]: I1202 10:17:13.831228 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bzwf5"] Dec 02 10:17:15 crc kubenswrapper[4711]: I1202 10:17:15.087036 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" path="/var/lib/kubelet/pods/fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb/volumes" Dec 02 10:17:15 crc kubenswrapper[4711]: I1202 10:17:15.814990 4711 generic.go:334] "Generic (PLEG): container finished" podID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerID="b11901c06214cb48c7689562ec098322ebd8f70efcfc731596f6beade240ad4b" exitCode=0 Dec 02 10:17:15 crc kubenswrapper[4711]: I1202 10:17:15.815145 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghgm" event={"ID":"41802e85-ca3f-4296-85ce-84bc4f4169d0","Type":"ContainerDied","Data":"b11901c06214cb48c7689562ec098322ebd8f70efcfc731596f6beade240ad4b"} Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.274079 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.333163 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-utilities\") pod \"41802e85-ca3f-4296-85ce-84bc4f4169d0\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.333310 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-catalog-content\") pod \"41802e85-ca3f-4296-85ce-84bc4f4169d0\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.333436 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r5jl\" (UniqueName: \"kubernetes.io/projected/41802e85-ca3f-4296-85ce-84bc4f4169d0-kube-api-access-2r5jl\") pod \"41802e85-ca3f-4296-85ce-84bc4f4169d0\" (UID: \"41802e85-ca3f-4296-85ce-84bc4f4169d0\") " Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.334365 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-utilities" (OuterVolumeSpecName: "utilities") pod "41802e85-ca3f-4296-85ce-84bc4f4169d0" (UID: "41802e85-ca3f-4296-85ce-84bc4f4169d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.338739 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41802e85-ca3f-4296-85ce-84bc4f4169d0-kube-api-access-2r5jl" (OuterVolumeSpecName: "kube-api-access-2r5jl") pod "41802e85-ca3f-4296-85ce-84bc4f4169d0" (UID: "41802e85-ca3f-4296-85ce-84bc4f4169d0"). InnerVolumeSpecName "kube-api-access-2r5jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.435704 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.435744 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r5jl\" (UniqueName: \"kubernetes.io/projected/41802e85-ca3f-4296-85ce-84bc4f4169d0-kube-api-access-2r5jl\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.447102 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41802e85-ca3f-4296-85ce-84bc4f4169d0" (UID: "41802e85-ca3f-4296-85ce-84bc4f4169d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.537103 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41802e85-ca3f-4296-85ce-84bc4f4169d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.822881 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghgm" event={"ID":"41802e85-ca3f-4296-85ce-84bc4f4169d0","Type":"ContainerDied","Data":"158f6681fa7f774fb67849d77416b0bf260886ed1ed6534b4c7bdf424bc7dd74"} Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.823298 4711 scope.go:117] "RemoveContainer" containerID="b11901c06214cb48c7689562ec098322ebd8f70efcfc731596f6beade240ad4b" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.823029 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dghgm" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.840094 4711 scope.go:117] "RemoveContainer" containerID="e886cefe8d4d019ba6097cf37c3e8c940687a1bf408add8a578c8dbd692e3958" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.852476 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dghgm"] Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.857218 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dghgm"] Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.866614 4711 scope.go:117] "RemoveContainer" containerID="c2fd1714ee0e78976931ca23279d24d6e489204760621f33ed543dd9a8d0657a" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.925732 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:17:16 crc kubenswrapper[4711]: I1202 10:17:16.968824 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7l5th" Dec 02 10:17:17 crc kubenswrapper[4711]: I1202 10:17:17.085117 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" path="/var/lib/kubelet/pods/41802e85-ca3f-4296-85ce-84bc4f4169d0/volumes" Dec 02 10:17:17 crc kubenswrapper[4711]: I1202 10:17:17.171668 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:17:18 crc kubenswrapper[4711]: I1202 10:17:18.571798 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-grd8v" Dec 02 10:17:20 crc kubenswrapper[4711]: I1202 10:17:20.860162 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vs5vb"] Dec 02 10:17:20 crc 
kubenswrapper[4711]: I1202 10:17:20.860759 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vs5vb" podUID="38739913-4803-444c-b624-013235f6eec3" containerName="registry-server" containerID="cri-o://3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234" gracePeriod=2 Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.363418 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.527292 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4swh\" (UniqueName: \"kubernetes.io/projected/38739913-4803-444c-b624-013235f6eec3-kube-api-access-s4swh\") pod \"38739913-4803-444c-b624-013235f6eec3\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.527411 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-catalog-content\") pod \"38739913-4803-444c-b624-013235f6eec3\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.527450 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-utilities\") pod \"38739913-4803-444c-b624-013235f6eec3\" (UID: \"38739913-4803-444c-b624-013235f6eec3\") " Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.528758 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-utilities" (OuterVolumeSpecName: "utilities") pod "38739913-4803-444c-b624-013235f6eec3" (UID: "38739913-4803-444c-b624-013235f6eec3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.536486 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38739913-4803-444c-b624-013235f6eec3-kube-api-access-s4swh" (OuterVolumeSpecName: "kube-api-access-s4swh") pod "38739913-4803-444c-b624-013235f6eec3" (UID: "38739913-4803-444c-b624-013235f6eec3"). InnerVolumeSpecName "kube-api-access-s4swh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.585932 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.586124 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.586197 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.586934 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.587068 4711 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f" gracePeriod=600 Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.612744 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38739913-4803-444c-b624-013235f6eec3" (UID: "38739913-4803-444c-b624-013235f6eec3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.629600 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.629646 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38739913-4803-444c-b624-013235f6eec3-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.629656 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4swh\" (UniqueName: \"kubernetes.io/projected/38739913-4803-444c-b624-013235f6eec3-kube-api-access-s4swh\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.862743 4711 generic.go:334] "Generic (PLEG): container finished" podID="38739913-4803-444c-b624-013235f6eec3" containerID="3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234" exitCode=0 Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.862797 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vs5vb" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.862846 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs5vb" event={"ID":"38739913-4803-444c-b624-013235f6eec3","Type":"ContainerDied","Data":"3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234"} Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.862874 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs5vb" event={"ID":"38739913-4803-444c-b624-013235f6eec3","Type":"ContainerDied","Data":"72d969900a7fc329ac9d76d8bbf5be6d20d70402a77bf4dd2f7f8f66f5cc1a8e"} Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.862891 4711 scope.go:117] "RemoveContainer" containerID="3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.865183 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f" exitCode=0 Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.865214 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f"} Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.865237 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"1e120b0a0cbb6d1edafb4930cf20f647e52a7929050bec77e5ac0b462823f904"} Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.875888 4711 scope.go:117] "RemoveContainer" 
containerID="11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.896537 4711 scope.go:117] "RemoveContainer" containerID="9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.905044 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vs5vb"] Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.909158 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vs5vb"] Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.914679 4711 scope.go:117] "RemoveContainer" containerID="3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234" Dec 02 10:17:22 crc kubenswrapper[4711]: E1202 10:17:22.915263 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234\": container with ID starting with 3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234 not found: ID does not exist" containerID="3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.915328 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234"} err="failed to get container status \"3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234\": rpc error: code = NotFound desc = could not find container \"3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234\": container with ID starting with 3eba2cdf82f089677a8c8df74ad9da9be507257d753461204384c019386f7234 not found: ID does not exist" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.915365 4711 scope.go:117] "RemoveContainer" 
containerID="11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47" Dec 02 10:17:22 crc kubenswrapper[4711]: E1202 10:17:22.915797 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47\": container with ID starting with 11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47 not found: ID does not exist" containerID="11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.915825 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47"} err="failed to get container status \"11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47\": rpc error: code = NotFound desc = could not find container \"11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47\": container with ID starting with 11ee6f7b0de711a282ef6b383e66a615833a03162eea734633495d850ae45b47 not found: ID does not exist" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.915842 4711 scope.go:117] "RemoveContainer" containerID="9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7" Dec 02 10:17:22 crc kubenswrapper[4711]: E1202 10:17:22.916168 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7\": container with ID starting with 9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7 not found: ID does not exist" containerID="9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7" Dec 02 10:17:22 crc kubenswrapper[4711]: I1202 10:17:22.916296 4711 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7"} err="failed to get container status \"9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7\": rpc error: code = NotFound desc = could not find container \"9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7\": container with ID starting with 9d29999d337e1f7b64bd00f063a2153b2b633c10157b9e893fbb285c63b86ee7 not found: ID does not exist" Dec 02 10:17:23 crc kubenswrapper[4711]: I1202 10:17:23.087648 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38739913-4803-444c-b624-013235f6eec3" path="/var/lib/kubelet/pods/38739913-4803-444c-b624-013235f6eec3/volumes" Dec 02 10:17:26 crc kubenswrapper[4711]: I1202 10:17:26.184147 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6sr4n"] Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.215139 4711 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217010 4711 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217363 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217397 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217415 4711 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerName="extract-content" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217423 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerName="extract-content" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217440 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38739913-4803-444c-b624-013235f6eec3" containerName="extract-utilities" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217451 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="38739913-4803-444c-b624-013235f6eec3" containerName="extract-utilities" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217459 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerName="extract-content" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217466 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerName="extract-content" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217480 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38739913-4803-444c-b624-013235f6eec3" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217487 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="38739913-4803-444c-b624-013235f6eec3" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217498 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerName="extract-utilities" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217506 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerName="extract-utilities" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217518 4711 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerName="extract-content" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217529 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerName="extract-content" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217541 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerName="extract-utilities" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217549 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerName="extract-utilities" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217559 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38739913-4803-444c-b624-013235f6eec3" containerName="extract-content" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217566 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="38739913-4803-444c-b624-013235f6eec3" containerName="extract-content" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217576 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217584 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217594 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c27c88-5a07-42e6-884a-de1988462f6e" containerName="pruner" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217606 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c27c88-5a07-42e6-884a-de1988462f6e" containerName="pruner" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217615 4711 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217622 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.217637 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerName="extract-utilities" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217644 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerName="extract-utilities" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217757 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9e8eb0-d3f7-4354-9e11-cd3f839a07cb" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217782 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="41802e85-ca3f-4296-85ce-84bc4f4169d0" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217795 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="38739913-4803-444c-b624-013235f6eec3" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217808 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="226fe786-af06-4294-a1e5-7e1c4aa86551" containerName="registry-server" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.217819 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c27c88-5a07-42e6-884a-de1988462f6e" containerName="pruner" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218299 4711 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218414 4711 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218467 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218564 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6" gracePeriod=15 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218580 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf" gracePeriod=15 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218619 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e" gracePeriod=15 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218631 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719" gracePeriod=15 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218626 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e" gracePeriod=15 Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.218851 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218867 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.218878 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218884 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.218894 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218899 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.218908 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.218913 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.220763 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.220784 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.220798 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.220805 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.220813 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.220821 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.220999 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.221015 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.221023 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.221031 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.221038 4711 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.221201 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.229853 4711 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.258633 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.322445 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.322547 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.322579 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 
10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.322609 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.322633 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.322756 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.322799 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.322863 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424137 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424466 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424489 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424506 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424523 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 
crc kubenswrapper[4711]: I1202 10:17:29.424316 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424554 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424571 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424588 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424604 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 
10:17:29.424605 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424606 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424630 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424553 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424729 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.424808 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.555042 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:17:29 crc kubenswrapper[4711]: E1202 10:17:29.573474 4711 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d5e9b2d5551f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 10:17:29.572610549 +0000 UTC m=+239.281976996,LastTimestamp:2025-12-02 10:17:29.572610549 +0000 UTC m=+239.281976996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.907416 4711 generic.go:334] "Generic (PLEG): container finished" podID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" containerID="1d677cc14ae50a1afcfca157402137dd5386f54bf5bfc9dac5f8fdaf81e802b9" exitCode=0 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.907485 4711 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"790393ef-9c83-4c71-a3cb-07d35a1e5f55","Type":"ContainerDied","Data":"1d677cc14ae50a1afcfca157402137dd5386f54bf5bfc9dac5f8fdaf81e802b9"} Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.908369 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.908725 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253"} Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.908752 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5aee77ddcd7c9a0778d66edc2713a847786887f2f193709a5d2e178df1b656fd"} Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.908791 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.909100 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.909318 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.910516 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.911653 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.912354 4711 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf" exitCode=0 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.912369 4711 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719" exitCode=0 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.912378 4711 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e" exitCode=0 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.912386 4711 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e" exitCode=2 Dec 02 10:17:29 crc kubenswrapper[4711]: I1202 10:17:29.912409 4711 scope.go:117] "RemoveContainer" containerID="e9fe0bb90a0483e7c5bacbe539229061eebabfea9bf3bc6dd4b098b741adeafe" Dec 02 10:17:30 crc kubenswrapper[4711]: E1202 10:17:30.269415 4711 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:30 crc kubenswrapper[4711]: E1202 10:17:30.270289 4711 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:30 crc kubenswrapper[4711]: E1202 10:17:30.270914 4711 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:30 crc kubenswrapper[4711]: E1202 10:17:30.271258 4711 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:30 crc kubenswrapper[4711]: E1202 10:17:30.271538 4711 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:30 crc kubenswrapper[4711]: I1202 10:17:30.271589 4711 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 02 10:17:30 crc kubenswrapper[4711]: E1202 
10:17:30.271879 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="200ms" Dec 02 10:17:30 crc kubenswrapper[4711]: E1202 10:17:30.473321 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="400ms" Dec 02 10:17:30 crc kubenswrapper[4711]: E1202 10:17:30.874308 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="800ms" Dec 02 10:17:30 crc kubenswrapper[4711]: I1202 10:17:30.921930 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.081291 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.081812 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 
10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.173176 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.174180 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.174629 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.247574 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kube-api-access\") pod \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.247628 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kubelet-dir\") pod \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\" (UID: \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.247726 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-var-lock\") pod \"790393ef-9c83-4c71-a3cb-07d35a1e5f55\" (UID: 
\"790393ef-9c83-4c71-a3cb-07d35a1e5f55\") " Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.247824 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "790393ef-9c83-4c71-a3cb-07d35a1e5f55" (UID: "790393ef-9c83-4c71-a3cb-07d35a1e5f55"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.247876 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-var-lock" (OuterVolumeSpecName: "var-lock") pod "790393ef-9c83-4c71-a3cb-07d35a1e5f55" (UID: "790393ef-9c83-4c71-a3cb-07d35a1e5f55"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.248172 4711 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.248188 4711 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.253332 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "790393ef-9c83-4c71-a3cb-07d35a1e5f55" (UID: "790393ef-9c83-4c71-a3cb-07d35a1e5f55"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.348819 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/790393ef-9c83-4c71-a3cb-07d35a1e5f55-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.597725 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.598624 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.599320 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.599639 4711 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.599838 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: E1202 
10:17:31.675689 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="1.6s" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.752331 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.752438 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.752469 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.752728 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.752768 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.752885 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.854393 4711 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.855044 4711 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.855100 4711 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.931159 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.931896 4711 generic.go:334] "Generic (PLEG): 
container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6" exitCode=0 Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.931985 4711 scope.go:117] "RemoveContainer" containerID="ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.932001 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.933855 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"790393ef-9c83-4c71-a3cb-07d35a1e5f55","Type":"ContainerDied","Data":"efe6fd2933cd87d4d7010dd7f0395dea072718900ca1baeac2eadb396a3a7a4f"} Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.933901 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efe6fd2933cd87d4d7010dd7f0395dea072718900ca1baeac2eadb396a3a7a4f" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.934121 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.945639 4711 scope.go:117] "RemoveContainer" containerID="ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.951010 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.951487 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.951681 4711 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.951853 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.952058 4711 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.952222 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.958113 4711 scope.go:117] "RemoveContainer" containerID="0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.969435 4711 scope.go:117] "RemoveContainer" containerID="70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.980595 4711 scope.go:117] "RemoveContainer" containerID="2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6" Dec 02 10:17:31 crc kubenswrapper[4711]: I1202 10:17:31.993931 4711 scope.go:117] "RemoveContainer" containerID="60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.010891 4711 scope.go:117] "RemoveContainer" containerID="ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf" Dec 02 10:17:32 crc kubenswrapper[4711]: E1202 10:17:32.011398 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\": container with ID starting with ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf not found: ID does not exist" 
containerID="ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.011455 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf"} err="failed to get container status \"ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\": rpc error: code = NotFound desc = could not find container \"ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf\": container with ID starting with ed72ff7c566e085196cb95249258bc5412357b2ab9b582d80d67318ad86679bf not found: ID does not exist" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.011486 4711 scope.go:117] "RemoveContainer" containerID="ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719" Dec 02 10:17:32 crc kubenswrapper[4711]: E1202 10:17:32.011775 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\": container with ID starting with ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719 not found: ID does not exist" containerID="ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.011822 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719"} err="failed to get container status \"ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\": rpc error: code = NotFound desc = could not find container \"ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719\": container with ID starting with ab71e474f32c6ec7852ee07db445a92000a71a77841cde9d0abf2dd3bccb0719 not found: ID does not exist" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.011861 4711 scope.go:117] 
"RemoveContainer" containerID="0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e" Dec 02 10:17:32 crc kubenswrapper[4711]: E1202 10:17:32.012164 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\": container with ID starting with 0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e not found: ID does not exist" containerID="0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.012189 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e"} err="failed to get container status \"0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\": rpc error: code = NotFound desc = could not find container \"0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e\": container with ID starting with 0ffd98a21d13862bff06e60c0d7fd20130681c1ecf1282f6de7260baab84ae2e not found: ID does not exist" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.012203 4711 scope.go:117] "RemoveContainer" containerID="70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e" Dec 02 10:17:32 crc kubenswrapper[4711]: E1202 10:17:32.012559 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\": container with ID starting with 70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e not found: ID does not exist" containerID="70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.012620 4711 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e"} err="failed to get container status \"70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\": rpc error: code = NotFound desc = could not find container \"70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e\": container with ID starting with 70603f49f94abc0a0cf4c6503f72ba4c50bc387265e2496c036c0bcef8b4113e not found: ID does not exist" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.012661 4711 scope.go:117] "RemoveContainer" containerID="2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6" Dec 02 10:17:32 crc kubenswrapper[4711]: E1202 10:17:32.013073 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\": container with ID starting with 2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6 not found: ID does not exist" containerID="2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.013110 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6"} err="failed to get container status \"2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\": rpc error: code = NotFound desc = could not find container \"2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6\": container with ID starting with 2425e26cc30352e658979599b07be798e4e9d94f4d340e2ebe133ad6407d1ee6 not found: ID does not exist" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.013130 4711 scope.go:117] "RemoveContainer" containerID="60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab" Dec 02 10:17:32 crc kubenswrapper[4711]: E1202 10:17:32.013399 4711 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\": container with ID starting with 60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab not found: ID does not exist" containerID="60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab" Dec 02 10:17:32 crc kubenswrapper[4711]: I1202 10:17:32.013435 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab"} err="failed to get container status \"60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\": rpc error: code = NotFound desc = could not find container \"60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab\": container with ID starting with 60175fc09a1af44c09062c648788dac49f5f1e6c54794798947dc32e427845ab not found: ID does not exist" Dec 02 10:17:33 crc kubenswrapper[4711]: I1202 10:17:33.087403 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 02 10:17:33 crc kubenswrapper[4711]: E1202 10:17:33.276864 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="3.2s" Dec 02 10:17:36 crc kubenswrapper[4711]: E1202 10:17:36.477876 4711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="6.4s" Dec 02 10:17:39 crc kubenswrapper[4711]: E1202 10:17:39.422585 4711 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d5e9b2d5551f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 10:17:29.572610549 +0000 UTC m=+239.281976996,LastTimestamp:2025-12-02 10:17:29.572610549 +0000 UTC m=+239.281976996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 10:17:41 crc kubenswrapper[4711]: I1202 10:17:41.085146 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:41 crc kubenswrapper[4711]: I1202 10:17:41.085646 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:42 crc kubenswrapper[4711]: E1202 10:17:42.879762 4711 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="7s" Dec 02 10:17:43 crc kubenswrapper[4711]: I1202 10:17:43.003903 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 10:17:43 crc kubenswrapper[4711]: I1202 10:17:43.003998 4711 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4" exitCode=1 Dec 02 10:17:43 crc kubenswrapper[4711]: I1202 10:17:43.004033 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4"} Dec 02 10:17:43 crc kubenswrapper[4711]: I1202 10:17:43.004494 4711 scope.go:117] "RemoveContainer" containerID="c1e5bec81096cdb204e54b867928e9ed90363b4b03605b327ee3bfef7733bed4" Dec 02 10:17:43 crc kubenswrapper[4711]: I1202 10:17:43.005022 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Dec 02 10:17:43 crc kubenswrapper[4711]: I1202 10:17:43.005893 4711 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.249:6443: 
connect: connection refused"
Dec 02 10:17:43 crc kubenswrapper[4711]: I1202 10:17:43.006686 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:43 crc kubenswrapper[4711]: I1202 10:17:43.218829 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.015441 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.015879 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff1a5f22fec50a78294f7578baadbc8f5982ae9d3f7429c08583072c37312d04"}
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.017014 4711 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.017511 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.018169 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.078373 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.079749 4711 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.081115 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.082128 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.094058 4711 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.094119 4711 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:44 crc kubenswrapper[4711]: E1202 10:17:44.094620 4711 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:44 crc kubenswrapper[4711]: I1202 10:17:44.095081 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:44 crc kubenswrapper[4711]: W1202 10:17:44.120043 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6f63a07d08cdeac7c760dcd175721892bdb6fb3259844c6419d637f4114f9c12 WatchSource:0}: Error finding container 6f63a07d08cdeac7c760dcd175721892bdb6fb3259844c6419d637f4114f9c12: Status 404 returned error can't find the container with id 6f63a07d08cdeac7c760dcd175721892bdb6fb3259844c6419d637f4114f9c12
Dec 02 10:17:45 crc kubenswrapper[4711]: I1202 10:17:45.024059 4711 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2d94b6f24dd0299d68a10fc38233b018beb2e5987bb1561599262e83d64bc131" exitCode=0
Dec 02 10:17:45 crc kubenswrapper[4711]: I1202 10:17:45.024121 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2d94b6f24dd0299d68a10fc38233b018beb2e5987bb1561599262e83d64bc131"}
Dec 02 10:17:45 crc kubenswrapper[4711]: I1202 10:17:45.025260 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f63a07d08cdeac7c760dcd175721892bdb6fb3259844c6419d637f4114f9c12"}
Dec 02 10:17:45 crc kubenswrapper[4711]: I1202 10:17:45.025845 4711 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:45 crc kubenswrapper[4711]: I1202 10:17:45.025882 4711 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:45 crc kubenswrapper[4711]: E1202 10:17:45.026270 4711 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:45 crc kubenswrapper[4711]: I1202 10:17:45.026477 4711 status_manager.go:851] "Failed to get status for pod" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:45 crc kubenswrapper[4711]: I1202 10:17:45.027450 4711 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:45 crc kubenswrapper[4711]: I1202 10:17:45.027899 4711 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused"
Dec 02 10:17:46 crc kubenswrapper[4711]: I1202 10:17:46.044244 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"73cd6747ac5c321a30207fb8e2e94e11459ac52a4c38f6497ef6a863caa4d4c9"}
Dec 02 10:17:46 crc kubenswrapper[4711]: I1202 10:17:46.044615 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64a6256d7c324af35c17c34d6ff08b47661f454482bf20b63207de361aed4c50"}
Dec 02 10:17:46 crc kubenswrapper[4711]: I1202 10:17:46.044631 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fd4f05ad2e167bc5756d889c947a0a1f0b3fe24ab79569f8d5de2d7d7fc4d1d5"}
Dec 02 10:17:46 crc kubenswrapper[4711]: I1202 10:17:46.044643 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"49a27c2cf407f5d421c1175ea895be32852b4d73f499b222d9d09cfdbd4aa780"}
Dec 02 10:17:47 crc kubenswrapper[4711]: I1202 10:17:47.052831 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"234fbe770b44e7ed4a093371c953d5026a66a95be5bcd93b8b7073cd9544489f"}
Dec 02 10:17:47 crc kubenswrapper[4711]: I1202 10:17:47.053190 4711 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:47 crc kubenswrapper[4711]: I1202 10:17:47.053217 4711 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:47 crc kubenswrapper[4711]: I1202 10:17:47.222286 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:17:47 crc kubenswrapper[4711]: I1202 10:17:47.227128 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:17:48 crc kubenswrapper[4711]: I1202 10:17:48.058647 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:17:49 crc kubenswrapper[4711]: I1202 10:17:49.095681 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:49 crc kubenswrapper[4711]: I1202 10:17:49.095746 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:49 crc kubenswrapper[4711]: I1202 10:17:49.101705 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.208096 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" podUID="0e0f1361-ab19-4762-9e5d-69d42bef5fb0" containerName="oauth-openshift" containerID="cri-o://20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963" gracePeriod=15
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.552436 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n"
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701405 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-error\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701459 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-session\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701484 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-ocp-branding-template\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701508 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-service-ca\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701530 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-login\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701595 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-trusted-ca-bundle\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701616 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mf2k\" (UniqueName: \"kubernetes.io/projected/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-kube-api-access-8mf2k\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701666 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-idp-0-file-data\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701700 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-policies\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701729 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-serving-cert\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701746 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-router-certs\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701764 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-provider-selection\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701784 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-dir\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.701815 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-cliconfig\") pod \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\" (UID: \"0e0f1361-ab19-4762-9e5d-69d42bef5fb0\") "
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.702698 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.702745 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.702764 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.702810 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.703339 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.709941 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.710126 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.710439 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.710517 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-kube-api-access-8mf2k" (OuterVolumeSpecName: "kube-api-access-8mf2k") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "kube-api-access-8mf2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.711308 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.711739 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.712495 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.712658 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.729996 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0e0f1361-ab19-4762-9e5d-69d42bef5fb0" (UID: "0e0f1361-ab19-4762-9e5d-69d42bef5fb0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803598 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803645 4711 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803662 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803677 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803691 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803704 4711 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803717 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803729 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803740 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803755 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803766 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803778 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803790 4711 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:51 crc kubenswrapper[4711]: I1202 10:17:51.803802 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mf2k\" (UniqueName: \"kubernetes.io/projected/0e0f1361-ab19-4762-9e5d-69d42bef5fb0-kube-api-access-8mf2k\") on node \"crc\" DevicePath \"\""
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.063803 4711 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.077279 4711 generic.go:334] "Generic (PLEG): container finished" podID="0e0f1361-ab19-4762-9e5d-69d42bef5fb0" containerID="20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963" exitCode=0
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.077340 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" event={"ID":"0e0f1361-ab19-4762-9e5d-69d42bef5fb0","Type":"ContainerDied","Data":"20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963"}
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.077383 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n" event={"ID":"0e0f1361-ab19-4762-9e5d-69d42bef5fb0","Type":"ContainerDied","Data":"ef929972da2ad1d3e3dab225a174b1a0e14aa82051c99e24eed326eeebad5804"}
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.077421 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6sr4n"
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.077457 4711 scope.go:117] "RemoveContainer" containerID="20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963"
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.102710 4711 scope.go:117] "RemoveContainer" containerID="20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963"
Dec 02 10:17:52 crc kubenswrapper[4711]: E1202 10:17:52.103782 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963\": container with ID starting with 20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963 not found: ID does not exist" containerID="20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963"
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.103839 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963"} err="failed to get container status \"20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963\": rpc error: code = NotFound desc = could not find container \"20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963\": container with ID starting with 20bcd6132d2f23b6b514970389c51eea4c87e6a93f2a353b33adf79aa742b963 not found: ID does not exist"
Dec 02 10:17:52 crc kubenswrapper[4711]: I1202 10:17:52.196535 4711 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df43a529-4592-4a0c-b43a-7575f120553a"
Dec 02 10:17:53 crc kubenswrapper[4711]: I1202 10:17:53.083518 4711 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:53 crc kubenswrapper[4711]: I1202 10:17:53.084055 4711 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:53 crc kubenswrapper[4711]: I1202 10:17:53.085913 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:53 crc kubenswrapper[4711]: I1202 10:17:53.086750 4711 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df43a529-4592-4a0c-b43a-7575f120553a"
Dec 02 10:17:53 crc kubenswrapper[4711]: I1202 10:17:53.089050 4711 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://49a27c2cf407f5d421c1175ea895be32852b4d73f499b222d9d09cfdbd4aa780"
Dec 02 10:17:53 crc kubenswrapper[4711]: I1202 10:17:53.089089 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:17:54 crc kubenswrapper[4711]: I1202 10:17:54.090322 4711 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:54 crc kubenswrapper[4711]: I1202 10:17:54.090378 4711 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:54 crc kubenswrapper[4711]: I1202 10:17:54.093004 4711 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df43a529-4592-4a0c-b43a-7575f120553a"
Dec 02 10:17:55 crc kubenswrapper[4711]: I1202 10:17:55.097045 4711 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:55 crc kubenswrapper[4711]: I1202 10:17:55.097112 4711 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695"
Dec 02 10:17:55 crc kubenswrapper[4711]: I1202 10:17:55.100828 4711 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df43a529-4592-4a0c-b43a-7575f120553a"
Dec 02 10:18:01 crc kubenswrapper[4711]: I1202 10:18:01.501503 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 02 10:18:01 crc kubenswrapper[4711]: I1202 10:18:01.649072 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 02 10:18:01 crc kubenswrapper[4711]: I1202 10:18:01.653769 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:18:02 crc kubenswrapper[4711]: I1202 10:18:02.126909 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 02 10:18:02 crc kubenswrapper[4711]: I1202 10:18:02.234926 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 02 10:18:02 crc kubenswrapper[4711]: I1202 10:18:02.269346 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 02 10:18:02 crc kubenswrapper[4711]: I1202 10:18:02.401011 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 02 10:18:02 crc kubenswrapper[4711]: I1202 10:18:02.687556 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 02 10:18:02 crc kubenswrapper[4711]: I1202 10:18:02.718399 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 02 10:18:02 crc kubenswrapper[4711]: I1202 10:18:02.922551 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 02 10:18:03 crc kubenswrapper[4711]: I1202 10:18:03.253499 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 02 10:18:03 crc kubenswrapper[4711]: I1202 10:18:03.418721 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 02 10:18:03 crc kubenswrapper[4711]: I1202 10:18:03.541781 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 02 10:18:03 crc kubenswrapper[4711]: I1202 10:18:03.643566 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 02 10:18:03 crc kubenswrapper[4711]: I1202 10:18:03.841114 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 02 10:18:03 crc kubenswrapper[4711]: I1202 10:18:03.937413 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.045865 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.047198 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.055865 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.320837 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.412994 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.465539 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.554455 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.650476 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.691925 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.895360 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.918693 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.946326 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 02 10:18:04 crc kubenswrapper[4711]: I1202 10:18:04.969457 4711 reflector.go:368]
Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.068535 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.124041 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.134371 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.160845 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.168912 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.197239 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.219159 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.219685 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.353317 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.430587 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 10:18:05 crc 
kubenswrapper[4711]: I1202 10:18:05.494078 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.506239 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.557918 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.590396 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.593883 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.610900 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.810437 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.819937 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.850164 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.860021 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 10:18:05 crc kubenswrapper[4711]: I1202 10:18:05.889768 4711 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.056056 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.230690 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.234087 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.275017 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.397751 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.417459 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.542112 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.613023 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.656738 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 10:18:06 crc kubenswrapper[4711]: I1202 10:18:06.972383 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 10:18:07 
crc kubenswrapper[4711]: I1202 10:18:07.246803 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.274365 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.320816 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.323426 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.441469 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.450629 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.470925 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.565367 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.617848 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.657849 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.692623 4711 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.693081 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.746762 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.777302 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 10:18:07 crc kubenswrapper[4711]: I1202 10:18:07.927788 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.017066 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.079069 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.085927 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.108118 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.164549 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.175165 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.341147 4711 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.366114 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.435839 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.444313 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.500816 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.530923 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.584926 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.633405 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.643219 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.677022 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.748504 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.813201 4711 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.879627 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 10:18:08 crc kubenswrapper[4711]: I1202 10:18:08.988526 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.112795 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.124018 4711 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.136367 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.230310 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.273636 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.322707 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.380121 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.403988 4711 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.424403 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.427340 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.442852 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.514976 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.532610 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.559529 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.580626 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.693258 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.877607 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.895222 4711 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.941407 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.956803 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.968966 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 10:18:09 crc kubenswrapper[4711]: I1202 10:18:09.983156 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.031609 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.097259 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.098847 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.259578 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.264194 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.266306 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.331180 4711 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.383438 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.390278 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.468225 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.512571 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.577146 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.624919 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.628019 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.633102 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.644436 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.677630 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.749401 4711 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.853353 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.869895 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.907368 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.912566 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 10:18:10 crc kubenswrapper[4711]: I1202 10:18:10.943761 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.003506 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.007542 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.008682 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.106520 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.170190 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.170342 4711 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.179558 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.222859 4711 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.392287 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.396868 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.418422 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.433064 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.487626 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.564802 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.622237 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.636456 4711 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.704818 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.706616 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.708443 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.743398 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.788091 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 10:18:11 crc kubenswrapper[4711]: I1202 10:18:11.916055 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.089866 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.102260 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.103775 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.238376 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 
10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.243609 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.416545 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.689154 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.786068 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.842726 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 10:18:12 crc kubenswrapper[4711]: I1202 10:18:12.969396 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.007185 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.038819 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.092104 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.308178 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.369236 4711 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.558738 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.568657 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.670750 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.774630 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.836102 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.896413 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.915081 4711 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 10:18:13 crc kubenswrapper[4711]: I1202 10:18:13.997255 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.008249 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.049824 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.115316 4711 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.127730 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.147272 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.161039 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.231452 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.252898 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.256413 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.310366 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.373107 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.382552 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.468263 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 
10:18:14.533140 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.729365 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.926248 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.934003 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.966644 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 10:18:14 crc kubenswrapper[4711]: I1202 10:18:14.996040 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.098365 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.103771 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.224584 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.253544 4711 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.254820 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 
10:18:15.260945 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.260859407 podStartE2EDuration="46.260859407s" podCreationTimestamp="2025-12-02 10:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:17:52.114808997 +0000 UTC m=+261.824175454" watchObservedRunningTime="2025-12-02 10:18:15.260859407 +0000 UTC m=+284.970225894" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.261444 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-6sr4n"] Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.261510 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6f96647944-qlvvj"] Dec 02 10:18:15 crc kubenswrapper[4711]: E1202 10:18:15.261837 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0f1361-ab19-4762-9e5d-69d42bef5fb0" containerName="oauth-openshift" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.261875 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0f1361-ab19-4762-9e5d-69d42bef5fb0" containerName="oauth-openshift" Dec 02 10:18:15 crc kubenswrapper[4711]: E1202 10:18:15.261917 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" containerName="installer" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.261929 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" containerName="installer" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.262078 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0f1361-ab19-4762-9e5d-69d42bef5fb0" containerName="oauth-openshift" Dec 02 10:18:15 
crc kubenswrapper[4711]: I1202 10:18:15.262100 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="790393ef-9c83-4c71-a3cb-07d35a1e5f55" containerName="installer" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.262302 4711 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.262395 4711 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00bd7360-ad0d-4725-84e3-28c7ba7e3695" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.262871 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.266367 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.266695 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.267085 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.267277 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.267361 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.269520 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 
10:18:15.269725 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.269727 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.270093 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.270720 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.270832 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.276506 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.277354 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.278019 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.279729 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.285673 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317247 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317316 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317350 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-audit-policies\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317374 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/647eee59-315b-474b-8efe-2eef9d56927f-audit-dir\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317412 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: 
\"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317443 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317526 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317594 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4tf\" (UniqueName: \"kubernetes.io/projected/647eee59-315b-474b-8efe-2eef9d56927f-kube-api-access-nx4tf\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317651 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: 
I1202 10:18:15.317682 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317897 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317921 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.317947 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.318043 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.318979 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.318944923 podStartE2EDuration="23.318944923s" podCreationTimestamp="2025-12-02 10:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:18:15.309059357 +0000 UTC m=+285.018425824" watchObservedRunningTime="2025-12-02 10:18:15.318944923 +0000 UTC m=+285.028311390" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.390554 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.406325 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.409355 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419125 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 
10:18:15.419165 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419198 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419221 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-audit-policies\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419241 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/647eee59-315b-474b-8efe-2eef9d56927f-audit-dir\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419256 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: 
\"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419275 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419302 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419326 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx4tf\" (UniqueName: \"kubernetes.io/projected/647eee59-315b-474b-8efe-2eef9d56927f-kube-api-access-nx4tf\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419377 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419401 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419420 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419443 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.419458 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.420151 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/647eee59-315b-474b-8efe-2eef9d56927f-audit-dir\") pod \"oauth-openshift-6f96647944-qlvvj\" 
(UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.420506 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.421147 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-audit-policies\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.421381 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.422095 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.426572 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.426586 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.427132 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.427211 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.427752 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " 
pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.428264 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.428535 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.430815 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.435033 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/647eee59-315b-474b-8efe-2eef9d56927f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.439440 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx4tf\" (UniqueName: \"kubernetes.io/projected/647eee59-315b-474b-8efe-2eef9d56927f-kube-api-access-nx4tf\") pod \"oauth-openshift-6f96647944-qlvvj\" (UID: \"647eee59-315b-474b-8efe-2eef9d56927f\") " pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc 
kubenswrapper[4711]: I1202 10:18:15.582134 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.623787 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.781947 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.795394 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.827893 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 10:18:15 crc kubenswrapper[4711]: I1202 10:18:15.875822 4711 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 10:18:16 crc kubenswrapper[4711]: I1202 10:18:16.002161 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f96647944-qlvvj"] Dec 02 10:18:16 crc kubenswrapper[4711]: I1202 10:18:16.062351 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 10:18:16 crc kubenswrapper[4711]: I1202 10:18:16.160564 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 10:18:16 crc kubenswrapper[4711]: I1202 10:18:16.223848 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" 
event={"ID":"647eee59-315b-474b-8efe-2eef9d56927f","Type":"ContainerStarted","Data":"506ad8f1e8c9827397228bfd506a2c08ecf011536c685190f2bd2a7914fbb77f"} Dec 02 10:18:16 crc kubenswrapper[4711]: I1202 10:18:16.646905 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 10:18:16 crc kubenswrapper[4711]: I1202 10:18:16.766465 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.049807 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.086564 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.087820 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0f1361-ab19-4762-9e5d-69d42bef5fb0" path="/var/lib/kubelet/pods/0e0f1361-ab19-4762-9e5d-69d42bef5fb0/volumes" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.126802 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.230100 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" event={"ID":"647eee59-315b-474b-8efe-2eef9d56927f","Type":"ContainerStarted","Data":"4c0fc63d268e63e33e0a8318f87ae83f5b6646cf5f7fa6aa75b71b970ae18ca0"} Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.230460 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.236335 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.251675 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.256608 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f96647944-qlvvj" podStartSLOduration=51.256585117 podStartE2EDuration="51.256585117s" podCreationTimestamp="2025-12-02 10:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:18:17.25518987 +0000 UTC m=+286.964556327" watchObservedRunningTime="2025-12-02 10:18:17.256585117 +0000 UTC m=+286.965951574" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.297921 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.698174 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.713600 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 10:18:17 crc kubenswrapper[4711]: I1202 10:18:17.821426 4711 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 10:18:19 crc kubenswrapper[4711]: I1202 10:18:19.119237 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 10:18:25 crc kubenswrapper[4711]: I1202 10:18:25.684003 4711 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:18:25 crc kubenswrapper[4711]: 
I1202 10:18:25.684899 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253" gracePeriod=5 Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.073525 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.269042 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.269110 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.326649 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.326714 4711 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253" exitCode=137 Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.326757 4711 scope.go:117] "RemoveContainer" containerID="be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.326856 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.340944 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341001 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341024 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341049 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341091 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341137 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341141 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341194 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341222 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341440 4711 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341470 4711 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341488 4711 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.341506 4711 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.348858 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.356257 4711 scope.go:117] "RemoveContainer" containerID="be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253" Dec 02 10:18:31 crc kubenswrapper[4711]: E1202 10:18:31.356671 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253\": container with ID starting with be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253 not found: ID does not exist" containerID="be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.356715 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253"} err="failed to get container status \"be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253\": rpc error: code = NotFound desc = could not find container \"be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253\": container with ID starting with be6486beb4af4cc3bb54ea9842b17a24d03cb1c93617e483618c8be20184f253 not found: ID does not exist" Dec 02 10:18:31 crc kubenswrapper[4711]: I1202 10:18:31.442307 4711 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:33 crc kubenswrapper[4711]: I1202 10:18:33.086374 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 02 10:18:33 crc kubenswrapper[4711]: I1202 10:18:33.086918 4711 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 02 
10:18:33 crc kubenswrapper[4711]: I1202 10:18:33.098163 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:18:33 crc kubenswrapper[4711]: I1202 10:18:33.098207 4711 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2b8df597-60c5-4d50-ad58-bd4af0d908e0" Dec 02 10:18:33 crc kubenswrapper[4711]: I1202 10:18:33.102503 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:18:33 crc kubenswrapper[4711]: I1202 10:18:33.102552 4711 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2b8df597-60c5-4d50-ad58-bd4af0d908e0" Dec 02 10:18:34 crc kubenswrapper[4711]: I1202 10:18:34.080196 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 10:18:34 crc kubenswrapper[4711]: I1202 10:18:34.255482 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 10:18:36 crc kubenswrapper[4711]: I1202 10:18:36.099042 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 10:18:37 crc kubenswrapper[4711]: I1202 10:18:37.567460 4711 generic.go:334] "Generic (PLEG): container finished" podID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerID="bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768" exitCode=0 Dec 02 10:18:37 crc kubenswrapper[4711]: I1202 10:18:37.567515 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" 
event={"ID":"8e8230b2-fb50-43c7-8a69-af1d02cce895","Type":"ContainerDied","Data":"bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768"} Dec 02 10:18:37 crc kubenswrapper[4711]: I1202 10:18:37.568053 4711 scope.go:117] "RemoveContainer" containerID="bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768" Dec 02 10:18:37 crc kubenswrapper[4711]: I1202 10:18:37.775011 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:18:37 crc kubenswrapper[4711]: I1202 10:18:37.775447 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:18:38 crc kubenswrapper[4711]: I1202 10:18:38.573890 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" event={"ID":"8e8230b2-fb50-43c7-8a69-af1d02cce895","Type":"ContainerStarted","Data":"d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142"} Dec 02 10:18:38 crc kubenswrapper[4711]: I1202 10:18:38.575130 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:18:38 crc kubenswrapper[4711]: I1202 10:18:38.580205 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.506130 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-49njp"] Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.507290 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" podUID="067e3491-3d3c-4bc6-a164-9093f895fbcf" containerName="controller-manager" 
containerID="cri-o://15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f" gracePeriod=30 Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.604813 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m"] Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.605171 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" podUID="53da86b0-43ce-4526-97db-a82df759ef58" containerName="route-controller-manager" containerID="cri-o://afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b" gracePeriod=30 Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.865115 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.934581 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-client-ca\") pod \"067e3491-3d3c-4bc6-a164-9093f895fbcf\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.934651 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-config\") pod \"067e3491-3d3c-4bc6-a164-9093f895fbcf\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.934676 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067e3491-3d3c-4bc6-a164-9093f895fbcf-serving-cert\") pod \"067e3491-3d3c-4bc6-a164-9093f895fbcf\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " Dec 02 10:18:40 crc kubenswrapper[4711]: 
I1202 10:18:40.934761 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxpdb\" (UniqueName: \"kubernetes.io/projected/067e3491-3d3c-4bc6-a164-9093f895fbcf-kube-api-access-vxpdb\") pod \"067e3491-3d3c-4bc6-a164-9093f895fbcf\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.934798 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-proxy-ca-bundles\") pod \"067e3491-3d3c-4bc6-a164-9093f895fbcf\" (UID: \"067e3491-3d3c-4bc6-a164-9093f895fbcf\") " Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.935919 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "067e3491-3d3c-4bc6-a164-9093f895fbcf" (UID: "067e3491-3d3c-4bc6-a164-9093f895fbcf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.935969 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-client-ca" (OuterVolumeSpecName: "client-ca") pod "067e3491-3d3c-4bc6-a164-9093f895fbcf" (UID: "067e3491-3d3c-4bc6-a164-9093f895fbcf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.936020 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-config" (OuterVolumeSpecName: "config") pod "067e3491-3d3c-4bc6-a164-9093f895fbcf" (UID: "067e3491-3d3c-4bc6-a164-9093f895fbcf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.936994 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.940292 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067e3491-3d3c-4bc6-a164-9093f895fbcf-kube-api-access-vxpdb" (OuterVolumeSpecName: "kube-api-access-vxpdb") pod "067e3491-3d3c-4bc6-a164-9093f895fbcf" (UID: "067e3491-3d3c-4bc6-a164-9093f895fbcf"). InnerVolumeSpecName "kube-api-access-vxpdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:18:40 crc kubenswrapper[4711]: I1202 10:18:40.940322 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067e3491-3d3c-4bc6-a164-9093f895fbcf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "067e3491-3d3c-4bc6-a164-9093f895fbcf" (UID: "067e3491-3d3c-4bc6-a164-9093f895fbcf"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.036733 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-config\") pod \"53da86b0-43ce-4526-97db-a82df759ef58\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.036831 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-client-ca\") pod \"53da86b0-43ce-4526-97db-a82df759ef58\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.036884 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da86b0-43ce-4526-97db-a82df759ef58-serving-cert\") pod \"53da86b0-43ce-4526-97db-a82df759ef58\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.037322 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b49lf\" (UniqueName: \"kubernetes.io/projected/53da86b0-43ce-4526-97db-a82df759ef58-kube-api-access-b49lf\") pod \"53da86b0-43ce-4526-97db-a82df759ef58\" (UID: \"53da86b0-43ce-4526-97db-a82df759ef58\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.037709 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-client-ca" (OuterVolumeSpecName: "client-ca") pod "53da86b0-43ce-4526-97db-a82df759ef58" (UID: "53da86b0-43ce-4526-97db-a82df759ef58"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.037897 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-config" (OuterVolumeSpecName: "config") pod "53da86b0-43ce-4526-97db-a82df759ef58" (UID: "53da86b0-43ce-4526-97db-a82df759ef58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.038306 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.038325 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxpdb\" (UniqueName: \"kubernetes.io/projected/067e3491-3d3c-4bc6-a164-9093f895fbcf-kube-api-access-vxpdb\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.038338 4711 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.038346 4711 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da86b0-43ce-4526-97db-a82df759ef58-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.038355 4711 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.038363 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/067e3491-3d3c-4bc6-a164-9093f895fbcf-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.038371 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067e3491-3d3c-4bc6-a164-9093f895fbcf-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.040487 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53da86b0-43ce-4526-97db-a82df759ef58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "53da86b0-43ce-4526-97db-a82df759ef58" (UID: "53da86b0-43ce-4526-97db-a82df759ef58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.040560 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53da86b0-43ce-4526-97db-a82df759ef58-kube-api-access-b49lf" (OuterVolumeSpecName: "kube-api-access-b49lf") pod "53da86b0-43ce-4526-97db-a82df759ef58" (UID: "53da86b0-43ce-4526-97db-a82df759ef58"). InnerVolumeSpecName "kube-api-access-b49lf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.126739 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6988db4df6-txndg"] Dec 02 10:18:41 crc kubenswrapper[4711]: E1202 10:18:41.126998 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.127015 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 10:18:41 crc kubenswrapper[4711]: E1202 10:18:41.127027 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53da86b0-43ce-4526-97db-a82df759ef58" containerName="route-controller-manager" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.127036 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="53da86b0-43ce-4526-97db-a82df759ef58" containerName="route-controller-manager" Dec 02 10:18:41 crc kubenswrapper[4711]: E1202 10:18:41.127053 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067e3491-3d3c-4bc6-a164-9093f895fbcf" containerName="controller-manager" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.127061 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="067e3491-3d3c-4bc6-a164-9093f895fbcf" containerName="controller-manager" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.127170 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.127183 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="53da86b0-43ce-4526-97db-a82df759ef58" containerName="route-controller-manager" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.127196 4711 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="067e3491-3d3c-4bc6-a164-9093f895fbcf" containerName="controller-manager" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.127607 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.132734 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw"] Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.133457 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.139414 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da86b0-43ce-4526-97db-a82df759ef58-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.139437 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b49lf\" (UniqueName: \"kubernetes.io/projected/53da86b0-43ce-4526-97db-a82df759ef58-kube-api-access-b49lf\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.176751 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6988db4df6-txndg"] Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.194784 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw"] Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.199448 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6988db4df6-txndg"] Dec 02 10:18:41 crc kubenswrapper[4711]: E1202 10:18:41.200080 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config 
kube-api-access-glgc9 proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" podUID="424053b7-9704-4eb0-a193-e02c5989fad8" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.231056 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw"] Dec 02 10:18:41 crc kubenswrapper[4711]: E1202 10:18:41.231691 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-6df6k serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" podUID="2a0ad997-4a24-4e45-9b82-f64827eb20a3" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.240667 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-config\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.240733 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-client-ca\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.240776 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-client-ca\") pod 
\"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.240805 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgc9\" (UniqueName: \"kubernetes.io/projected/424053b7-9704-4eb0-a193-e02c5989fad8-kube-api-access-glgc9\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.240854 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/424053b7-9704-4eb0-a193-e02c5989fad8-serving-cert\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.240904 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6df6k\" (UniqueName: \"kubernetes.io/projected/2a0ad997-4a24-4e45-9b82-f64827eb20a3-kube-api-access-6df6k\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.241127 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0ad997-4a24-4e45-9b82-f64827eb20a3-serving-cert\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc 
kubenswrapper[4711]: I1202 10:18:41.241192 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-proxy-ca-bundles\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.241247 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-config\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.342884 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6df6k\" (UniqueName: \"kubernetes.io/projected/2a0ad997-4a24-4e45-9b82-f64827eb20a3-kube-api-access-6df6k\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.342977 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0ad997-4a24-4e45-9b82-f64827eb20a3-serving-cert\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.343024 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-proxy-ca-bundles\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.343050 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-config\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.343073 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-config\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.343115 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-client-ca\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.343158 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-client-ca\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.343219 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glgc9\" (UniqueName: \"kubernetes.io/projected/424053b7-9704-4eb0-a193-e02c5989fad8-kube-api-access-glgc9\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.343242 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/424053b7-9704-4eb0-a193-e02c5989fad8-serving-cert\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.344581 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-config\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.344683 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-client-ca\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.344878 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-config\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " 
pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.345024 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-client-ca\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.345447 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-proxy-ca-bundles\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.347209 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0ad997-4a24-4e45-9b82-f64827eb20a3-serving-cert\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.352550 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/424053b7-9704-4eb0-a193-e02c5989fad8-serving-cert\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.358136 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glgc9\" (UniqueName: 
\"kubernetes.io/projected/424053b7-9704-4eb0-a193-e02c5989fad8-kube-api-access-glgc9\") pod \"controller-manager-6988db4df6-txndg\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.358941 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6df6k\" (UniqueName: \"kubernetes.io/projected/2a0ad997-4a24-4e45-9b82-f64827eb20a3-kube-api-access-6df6k\") pod \"route-controller-manager-6b588d959f-6g6zw\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.592456 4711 generic.go:334] "Generic (PLEG): container finished" podID="067e3491-3d3c-4bc6-a164-9093f895fbcf" containerID="15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f" exitCode=0 Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.592511 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.592585 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" event={"ID":"067e3491-3d3c-4bc6-a164-9093f895fbcf","Type":"ContainerDied","Data":"15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f"} Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.592709 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-49njp" event={"ID":"067e3491-3d3c-4bc6-a164-9093f895fbcf","Type":"ContainerDied","Data":"93c0d0dc9efe58616709b80ec2372ebc85a9ef8f0de6d7b2af02f3f7be5915ef"} Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.592814 4711 scope.go:117] "RemoveContainer" containerID="15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.594175 4711 generic.go:334] "Generic (PLEG): container finished" podID="53da86b0-43ce-4526-97db-a82df759ef58" containerID="afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b" exitCode=0 Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.594257 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.594272 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.594368 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.594241 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" event={"ID":"53da86b0-43ce-4526-97db-a82df759ef58","Type":"ContainerDied","Data":"afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b"} Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.594668 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m" event={"ID":"53da86b0-43ce-4526-97db-a82df759ef58","Type":"ContainerDied","Data":"2c0080af4d6aa7bdf23f1f658a3cf1a2ef1a42325d6758c23f66d2c4359d6bf6"} Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.610588 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.615702 4711 scope.go:117] "RemoveContainer" containerID="15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f" Dec 02 10:18:41 crc kubenswrapper[4711]: E1202 10:18:41.616771 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f\": container with ID starting with 15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f not found: ID does not exist" containerID="15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.616857 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f"} err="failed to get container status \"15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f\": 
rpc error: code = NotFound desc = could not find container \"15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f\": container with ID starting with 15ff9d4543f630bae62516776f7574f1d7d899a454a69bf4e8ce9e94d008c82f not found: ID does not exist" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.616902 4711 scope.go:117] "RemoveContainer" containerID="afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.629802 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-49njp"] Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.638139 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-49njp"] Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.665039 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m"] Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.683665 4711 scope.go:117] "RemoveContainer" containerID="afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.683767 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.684350 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjz2m"] Dec 02 10:18:41 crc kubenswrapper[4711]: E1202 10:18:41.684408 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b\": container with ID starting with afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b not found: ID does not exist" containerID="afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.684438 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b"} err="failed to get container status \"afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b\": rpc error: code = NotFound desc = could not find container \"afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b\": container with ID starting with afc451448fd521fece962a11ed3501fdffa12d1031a2f3f21ac5a268b67c374b not found: ID does not exist" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.747717 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-config\") pod \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.747782 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-config\") pod \"424053b7-9704-4eb0-a193-e02c5989fad8\" (UID: 
\"424053b7-9704-4eb0-a193-e02c5989fad8\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.747828 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glgc9\" (UniqueName: \"kubernetes.io/projected/424053b7-9704-4eb0-a193-e02c5989fad8-kube-api-access-glgc9\") pod \"424053b7-9704-4eb0-a193-e02c5989fad8\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.747870 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-client-ca\") pod \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.747905 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/424053b7-9704-4eb0-a193-e02c5989fad8-serving-cert\") pod \"424053b7-9704-4eb0-a193-e02c5989fad8\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.747929 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0ad997-4a24-4e45-9b82-f64827eb20a3-serving-cert\") pod \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.747977 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-client-ca\") pod \"424053b7-9704-4eb0-a193-e02c5989fad8\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.748032 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6df6k\" (UniqueName: 
\"kubernetes.io/projected/2a0ad997-4a24-4e45-9b82-f64827eb20a3-kube-api-access-6df6k\") pod \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\" (UID: \"2a0ad997-4a24-4e45-9b82-f64827eb20a3\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.748051 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-proxy-ca-bundles\") pod \"424053b7-9704-4eb0-a193-e02c5989fad8\" (UID: \"424053b7-9704-4eb0-a193-e02c5989fad8\") " Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.748759 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-client-ca" (OuterVolumeSpecName: "client-ca") pod "424053b7-9704-4eb0-a193-e02c5989fad8" (UID: "424053b7-9704-4eb0-a193-e02c5989fad8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.749053 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-config" (OuterVolumeSpecName: "config") pod "2a0ad997-4a24-4e45-9b82-f64827eb20a3" (UID: "2a0ad997-4a24-4e45-9b82-f64827eb20a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.749097 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-config" (OuterVolumeSpecName: "config") pod "424053b7-9704-4eb0-a193-e02c5989fad8" (UID: "424053b7-9704-4eb0-a193-e02c5989fad8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.749142 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a0ad997-4a24-4e45-9b82-f64827eb20a3" (UID: "2a0ad997-4a24-4e45-9b82-f64827eb20a3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.749929 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "424053b7-9704-4eb0-a193-e02c5989fad8" (UID: "424053b7-9704-4eb0-a193-e02c5989fad8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.751545 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a0ad997-4a24-4e45-9b82-f64827eb20a3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a0ad997-4a24-4e45-9b82-f64827eb20a3" (UID: "2a0ad997-4a24-4e45-9b82-f64827eb20a3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.751821 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424053b7-9704-4eb0-a193-e02c5989fad8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "424053b7-9704-4eb0-a193-e02c5989fad8" (UID: "424053b7-9704-4eb0-a193-e02c5989fad8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.754208 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424053b7-9704-4eb0-a193-e02c5989fad8-kube-api-access-glgc9" (OuterVolumeSpecName: "kube-api-access-glgc9") pod "424053b7-9704-4eb0-a193-e02c5989fad8" (UID: "424053b7-9704-4eb0-a193-e02c5989fad8"). InnerVolumeSpecName "kube-api-access-glgc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.754437 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0ad997-4a24-4e45-9b82-f64827eb20a3-kube-api-access-6df6k" (OuterVolumeSpecName: "kube-api-access-6df6k") pod "2a0ad997-4a24-4e45-9b82-f64827eb20a3" (UID: "2a0ad997-4a24-4e45-9b82-f64827eb20a3"). InnerVolumeSpecName "kube-api-access-6df6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850059 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850097 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850108 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glgc9\" (UniqueName: \"kubernetes.io/projected/424053b7-9704-4eb0-a193-e02c5989fad8-kube-api-access-glgc9\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850118 4711 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a0ad997-4a24-4e45-9b82-f64827eb20a3-client-ca\") on node 
\"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850130 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/424053b7-9704-4eb0-a193-e02c5989fad8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850138 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0ad997-4a24-4e45-9b82-f64827eb20a3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850145 4711 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850152 4711 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/424053b7-9704-4eb0-a193-e02c5989fad8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:41 crc kubenswrapper[4711]: I1202 10:18:41.850161 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6df6k\" (UniqueName: \"kubernetes.io/projected/2a0ad997-4a24-4e45-9b82-f64827eb20a3-kube-api-access-6df6k\") on node \"crc\" DevicePath \"\"" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.604439 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6988db4df6-txndg" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.604487 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.663627 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw"] Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.665007 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.669083 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw"] Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.672141 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.672924 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.673622 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.673856 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.674091 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.674322 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.676685 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6b588d959f-6g6zw"] Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.683434 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw"] Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.711110 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6988db4df6-txndg"] Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.718799 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6988db4df6-txndg"] Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.763410 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc86d14b-7c7e-4068-914b-5cdce240460e-serving-cert\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.763466 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-client-ca\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.763498 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhk2\" (UniqueName: \"kubernetes.io/projected/cc86d14b-7c7e-4068-914b-5cdce240460e-kube-api-access-qrhk2\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " 
pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.763546 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-config\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.864735 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc86d14b-7c7e-4068-914b-5cdce240460e-serving-cert\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.865049 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-client-ca\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.865076 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhk2\" (UniqueName: \"kubernetes.io/projected/cc86d14b-7c7e-4068-914b-5cdce240460e-kube-api-access-qrhk2\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.865126 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-config\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.866041 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-client-ca\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.866246 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-config\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.869421 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc86d14b-7c7e-4068-914b-5cdce240460e-serving-cert\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc kubenswrapper[4711]: I1202 10:18:42.879407 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhk2\" (UniqueName: \"kubernetes.io/projected/cc86d14b-7c7e-4068-914b-5cdce240460e-kube-api-access-qrhk2\") pod \"route-controller-manager-7f44db6864-f6grw\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:42 crc 
kubenswrapper[4711]: I1202 10:18:42.985385 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.083903 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="067e3491-3d3c-4bc6-a164-9093f895fbcf" path="/var/lib/kubelet/pods/067e3491-3d3c-4bc6-a164-9093f895fbcf/volumes" Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.084812 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a0ad997-4a24-4e45-9b82-f64827eb20a3" path="/var/lib/kubelet/pods/2a0ad997-4a24-4e45-9b82-f64827eb20a3/volumes" Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.085260 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="424053b7-9704-4eb0-a193-e02c5989fad8" path="/var/lib/kubelet/pods/424053b7-9704-4eb0-a193-e02c5989fad8/volumes" Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.085692 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53da86b0-43ce-4526-97db-a82df759ef58" path="/var/lib/kubelet/pods/53da86b0-43ce-4526-97db-a82df759ef58/volumes" Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.242919 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw"] Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.611064 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" event={"ID":"cc86d14b-7c7e-4068-914b-5cdce240460e","Type":"ContainerStarted","Data":"d1ed5a3bf951ba5eeb7f974a8cdbe52e15b16e303802cc93a3ddca7f6ff67a57"} Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.612748 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:43 crc 
kubenswrapper[4711]: I1202 10:18:43.612784 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" event={"ID":"cc86d14b-7c7e-4068-914b-5cdce240460e","Type":"ContainerStarted","Data":"959ec5a78547524765a3afca0fbc0d9398d2d18fc95fb3681fbd34531f9ba9fd"} Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.691066 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:18:43 crc kubenswrapper[4711]: I1202 10:18:43.709754 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" podStartSLOduration=2.709727779 podStartE2EDuration="2.709727779s" podCreationTimestamp="2025-12-02 10:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:18:43.63967276 +0000 UTC m=+313.349039197" watchObservedRunningTime="2025-12-02 10:18:43.709727779 +0000 UTC m=+313.419094236" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.571916 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78855b4d5f-4dtsh"] Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.574250 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.578182 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.579047 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.581543 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.583047 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.585465 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.585876 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.589783 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78855b4d5f-4dtsh"] Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.598861 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.706573 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bd22d4-d56f-442a-96ce-6f3246aed105-serving-cert\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " 
pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.706668 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-config\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.706779 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-client-ca\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.707234 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-proxy-ca-bundles\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.707266 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwrx\" (UniqueName: \"kubernetes.io/projected/b2bd22d4-d56f-442a-96ce-6f3246aed105-kube-api-access-jtwrx\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.809167 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-config\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.809264 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-client-ca\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.809377 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-proxy-ca-bundles\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.809462 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtwrx\" (UniqueName: \"kubernetes.io/projected/b2bd22d4-d56f-442a-96ce-6f3246aed105-kube-api-access-jtwrx\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.809633 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bd22d4-d56f-442a-96ce-6f3246aed105-serving-cert\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.811395 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-client-ca\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.811633 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-config\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.812932 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-proxy-ca-bundles\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.828501 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bd22d4-d56f-442a-96ce-6f3246aed105-serving-cert\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.832831 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtwrx\" (UniqueName: \"kubernetes.io/projected/b2bd22d4-d56f-442a-96ce-6f3246aed105-kube-api-access-jtwrx\") pod \"controller-manager-78855b4d5f-4dtsh\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 
10:18:45 crc kubenswrapper[4711]: I1202 10:18:45.910647 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:46 crc kubenswrapper[4711]: I1202 10:18:46.333100 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78855b4d5f-4dtsh"] Dec 02 10:18:46 crc kubenswrapper[4711]: I1202 10:18:46.630733 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" event={"ID":"b2bd22d4-d56f-442a-96ce-6f3246aed105","Type":"ContainerStarted","Data":"b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0"} Dec 02 10:18:46 crc kubenswrapper[4711]: I1202 10:18:46.631347 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:46 crc kubenswrapper[4711]: I1202 10:18:46.631366 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" event={"ID":"b2bd22d4-d56f-442a-96ce-6f3246aed105","Type":"ContainerStarted","Data":"e3e9578dbe260d5ae3bacc789fc4ec0fdd438044356fc5d5d07aba2cb7933c96"} Dec 02 10:18:46 crc kubenswrapper[4711]: I1202 10:18:46.640585 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:18:46 crc kubenswrapper[4711]: I1202 10:18:46.657259 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" podStartSLOduration=5.657179756 podStartE2EDuration="5.657179756s" podCreationTimestamp="2025-12-02 10:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:18:46.655050649 +0000 UTC m=+316.364417186" 
watchObservedRunningTime="2025-12-02 10:18:46.657179756 +0000 UTC m=+316.366546213" Dec 02 10:18:47 crc kubenswrapper[4711]: I1202 10:18:47.670200 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 10:18:55 crc kubenswrapper[4711]: I1202 10:18:55.551555 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 10:19:00 crc kubenswrapper[4711]: I1202 10:19:00.484000 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw"] Dec 02 10:19:00 crc kubenswrapper[4711]: I1202 10:19:00.484997 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" podUID="cc86d14b-7c7e-4068-914b-5cdce240460e" containerName="route-controller-manager" containerID="cri-o://d1ed5a3bf951ba5eeb7f974a8cdbe52e15b16e303802cc93a3ddca7f6ff67a57" gracePeriod=30 Dec 02 10:19:00 crc kubenswrapper[4711]: I1202 10:19:00.723678 4711 generic.go:334] "Generic (PLEG): container finished" podID="cc86d14b-7c7e-4068-914b-5cdce240460e" containerID="d1ed5a3bf951ba5eeb7f974a8cdbe52e15b16e303802cc93a3ddca7f6ff67a57" exitCode=0 Dec 02 10:19:00 crc kubenswrapper[4711]: I1202 10:19:00.723854 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" event={"ID":"cc86d14b-7c7e-4068-914b-5cdce240460e","Type":"ContainerDied","Data":"d1ed5a3bf951ba5eeb7f974a8cdbe52e15b16e303802cc93a3ddca7f6ff67a57"} Dec 02 10:19:00 crc kubenswrapper[4711]: I1202 10:19:00.937649 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.013121 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-config\") pod \"cc86d14b-7c7e-4068-914b-5cdce240460e\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.013296 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhk2\" (UniqueName: \"kubernetes.io/projected/cc86d14b-7c7e-4068-914b-5cdce240460e-kube-api-access-qrhk2\") pod \"cc86d14b-7c7e-4068-914b-5cdce240460e\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.013348 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc86d14b-7c7e-4068-914b-5cdce240460e-serving-cert\") pod \"cc86d14b-7c7e-4068-914b-5cdce240460e\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.013382 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-client-ca\") pod \"cc86d14b-7c7e-4068-914b-5cdce240460e\" (UID: \"cc86d14b-7c7e-4068-914b-5cdce240460e\") " Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.014285 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-config" (OuterVolumeSpecName: "config") pod "cc86d14b-7c7e-4068-914b-5cdce240460e" (UID: "cc86d14b-7c7e-4068-914b-5cdce240460e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.014337 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-client-ca" (OuterVolumeSpecName: "client-ca") pod "cc86d14b-7c7e-4068-914b-5cdce240460e" (UID: "cc86d14b-7c7e-4068-914b-5cdce240460e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.020723 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc86d14b-7c7e-4068-914b-5cdce240460e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc86d14b-7c7e-4068-914b-5cdce240460e" (UID: "cc86d14b-7c7e-4068-914b-5cdce240460e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.021234 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc86d14b-7c7e-4068-914b-5cdce240460e-kube-api-access-qrhk2" (OuterVolumeSpecName: "kube-api-access-qrhk2") pod "cc86d14b-7c7e-4068-914b-5cdce240460e" (UID: "cc86d14b-7c7e-4068-914b-5cdce240460e"). InnerVolumeSpecName "kube-api-access-qrhk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.114739 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrhk2\" (UniqueName: \"kubernetes.io/projected/cc86d14b-7c7e-4068-914b-5cdce240460e-kube-api-access-qrhk2\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.114780 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc86d14b-7c7e-4068-914b-5cdce240460e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.114793 4711 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.114804 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc86d14b-7c7e-4068-914b-5cdce240460e-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.572605 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72"] Dec 02 10:19:01 crc kubenswrapper[4711]: E1202 10:19:01.572855 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc86d14b-7c7e-4068-914b-5cdce240460e" containerName="route-controller-manager" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.572875 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc86d14b-7c7e-4068-914b-5cdce240460e" containerName="route-controller-manager" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.573010 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc86d14b-7c7e-4068-914b-5cdce240460e" containerName="route-controller-manager" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.573436 
4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.585703 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72"] Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.620094 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0949c4a-b392-4812-956a-a9db5e46ca0d-client-ca\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.620160 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmggp\" (UniqueName: \"kubernetes.io/projected/f0949c4a-b392-4812-956a-a9db5e46ca0d-kube-api-access-mmggp\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.620255 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0949c4a-b392-4812-956a-a9db5e46ca0d-config\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.620281 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0949c4a-b392-4812-956a-a9db5e46ca0d-serving-cert\") pod 
\"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.721150 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0949c4a-b392-4812-956a-a9db5e46ca0d-client-ca\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.721238 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmggp\" (UniqueName: \"kubernetes.io/projected/f0949c4a-b392-4812-956a-a9db5e46ca0d-kube-api-access-mmggp\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.721381 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0949c4a-b392-4812-956a-a9db5e46ca0d-config\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.721431 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0949c4a-b392-4812-956a-a9db5e46ca0d-serving-cert\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.722357 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0949c4a-b392-4812-956a-a9db5e46ca0d-client-ca\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.722748 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0949c4a-b392-4812-956a-a9db5e46ca0d-config\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.729780 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0949c4a-b392-4812-956a-a9db5e46ca0d-serving-cert\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.735587 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" event={"ID":"cc86d14b-7c7e-4068-914b-5cdce240460e","Type":"ContainerDied","Data":"959ec5a78547524765a3afca0fbc0d9398d2d18fc95fb3681fbd34531f9ba9fd"} Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.735675 4711 scope.go:117] "RemoveContainer" containerID="d1ed5a3bf951ba5eeb7f974a8cdbe52e15b16e303802cc93a3ddca7f6ff67a57" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.735694 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.750874 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmggp\" (UniqueName: \"kubernetes.io/projected/f0949c4a-b392-4812-956a-a9db5e46ca0d-kube-api-access-mmggp\") pod \"route-controller-manager-6b588d959f-2rt72\" (UID: \"f0949c4a-b392-4812-956a-a9db5e46ca0d\") " pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.785170 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw"] Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.789277 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f44db6864-f6grw"] Dec 02 10:19:01 crc kubenswrapper[4711]: I1202 10:19:01.911183 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:02 crc kubenswrapper[4711]: I1202 10:19:02.370307 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72"] Dec 02 10:19:02 crc kubenswrapper[4711]: W1202 10:19:02.377541 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0949c4a_b392_4812_956a_a9db5e46ca0d.slice/crio-996146f69918aee76415246215a0cf8e59289d9cce47a913d878e533e92cd07f WatchSource:0}: Error finding container 996146f69918aee76415246215a0cf8e59289d9cce47a913d878e533e92cd07f: Status 404 returned error can't find the container with id 996146f69918aee76415246215a0cf8e59289d9cce47a913d878e533e92cd07f Dec 02 10:19:02 crc kubenswrapper[4711]: I1202 10:19:02.746513 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" event={"ID":"f0949c4a-b392-4812-956a-a9db5e46ca0d","Type":"ContainerStarted","Data":"e625b6c0c0adc88de6455b6c1177359a8d94944fcd2966149eee12ba2c0ac231"} Dec 02 10:19:02 crc kubenswrapper[4711]: I1202 10:19:02.746591 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" event={"ID":"f0949c4a-b392-4812-956a-a9db5e46ca0d","Type":"ContainerStarted","Data":"996146f69918aee76415246215a0cf8e59289d9cce47a913d878e533e92cd07f"} Dec 02 10:19:02 crc kubenswrapper[4711]: I1202 10:19:02.747866 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:02 crc kubenswrapper[4711]: I1202 10:19:02.770662 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" 
podStartSLOduration=2.77061243 podStartE2EDuration="2.77061243s" podCreationTimestamp="2025-12-02 10:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:19:02.769819158 +0000 UTC m=+332.479185635" watchObservedRunningTime="2025-12-02 10:19:02.77061243 +0000 UTC m=+332.479978897" Dec 02 10:19:02 crc kubenswrapper[4711]: I1202 10:19:02.920363 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b588d959f-2rt72" Dec 02 10:19:03 crc kubenswrapper[4711]: I1202 10:19:03.085355 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc86d14b-7c7e-4068-914b-5cdce240460e" path="/var/lib/kubelet/pods/cc86d14b-7c7e-4068-914b-5cdce240460e/volumes" Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.876656 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tb9pv"] Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.877407 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tb9pv" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerName="registry-server" containerID="cri-o://cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539" gracePeriod=30 Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.887934 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7l5th"] Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.888233 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7l5th" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="registry-server" containerID="cri-o://d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58" gracePeriod=30 Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 
10:19:07.893799 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5tws"]
Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.894108 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" containerID="cri-o://d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142" gracePeriod=30
Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.906643 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grd8v"]
Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.906929 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-grd8v" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="registry-server" containerID="cri-o://19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7" gracePeriod=30
Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.910410 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4fwd"]
Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.911197 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.913240 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fgpt"]
Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.914353 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7fgpt" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" containerName="registry-server" containerID="cri-o://d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f" gracePeriod=30
Dec 02 10:19:07 crc kubenswrapper[4711]: I1202 10:19:07.921716 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4fwd"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.007689 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc27e106-dc06-4326-9cc4-99ca9b5206bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.007890 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbhq9\" (UniqueName: \"kubernetes.io/projected/bc27e106-dc06-4326-9cc4-99ca9b5206bb-kube-api-access-lbhq9\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.008130 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bc27e106-dc06-4326-9cc4-99ca9b5206bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.109912 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bc27e106-dc06-4326-9cc4-99ca9b5206bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.110043 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc27e106-dc06-4326-9cc4-99ca9b5206bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.110068 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbhq9\" (UniqueName: \"kubernetes.io/projected/bc27e106-dc06-4326-9cc4-99ca9b5206bb-kube-api-access-lbhq9\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.111645 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc27e106-dc06-4326-9cc4-99ca9b5206bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.116859 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bc27e106-dc06-4326-9cc4-99ca9b5206bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.125580 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbhq9\" (UniqueName: \"kubernetes.io/projected/bc27e106-dc06-4326-9cc4-99ca9b5206bb-kube-api-access-lbhq9\") pod \"marketplace-operator-79b997595-z4fwd\" (UID: \"bc27e106-dc06-4326-9cc4-99ca9b5206bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.279360 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.436796 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb9pv"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.522495 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-728rt\" (UniqueName: \"kubernetes.io/projected/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-kube-api-access-728rt\") pod \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.522643 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-utilities\") pod \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.522706 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-catalog-content\") pod \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\" (UID: \"ef385405-9334-4ba2-a7ed-abd9d51cbd5d\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.533061 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-kube-api-access-728rt" (OuterVolumeSpecName: "kube-api-access-728rt") pod "ef385405-9334-4ba2-a7ed-abd9d51cbd5d" (UID: "ef385405-9334-4ba2-a7ed-abd9d51cbd5d"). InnerVolumeSpecName "kube-api-access-728rt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.535348 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-utilities" (OuterVolumeSpecName: "utilities") pod "ef385405-9334-4ba2-a7ed-abd9d51cbd5d" (UID: "ef385405-9334-4ba2-a7ed-abd9d51cbd5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.537916 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7 is running failed: container process not found" containerID="19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7" cmd=["grpc_health_probe","-addr=:50051"]
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.538259 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7 is running failed: container process not found" containerID="19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7" cmd=["grpc_health_probe","-addr=:50051"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.538436 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fgpt"
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.538550 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7 is running failed: container process not found" containerID="19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7" cmd=["grpc_health_probe","-addr=:50051"]
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.538608 4711 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-grd8v" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="registry-server"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.565494 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.568127 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7l5th"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.579786 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grd8v"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.623381 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jctq\" (UniqueName: \"kubernetes.io/projected/8e8230b2-fb50-43c7-8a69-af1d02cce895-kube-api-access-6jctq\") pod \"8e8230b2-fb50-43c7-8a69-af1d02cce895\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.623703 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-trusted-ca\") pod \"8e8230b2-fb50-43c7-8a69-af1d02cce895\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.623824 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-catalog-content\") pod \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.623918 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-utilities\") pod \"a3c13742-8972-4320-8b08-ced2c55156d3\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.624030 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-operator-metrics\") pod \"8e8230b2-fb50-43c7-8a69-af1d02cce895\" (UID: \"8e8230b2-fb50-43c7-8a69-af1d02cce895\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.624111 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-utilities\") pod \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.624231 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blq7g\" (UniqueName: \"kubernetes.io/projected/a3c13742-8972-4320-8b08-ced2c55156d3-kube-api-access-blq7g\") pod \"a3c13742-8972-4320-8b08-ced2c55156d3\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.624351 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfn9x\" (UniqueName: \"kubernetes.io/projected/d37a3481-62b0-42fd-b6c9-198f0e5aac93-kube-api-access-nfn9x\") pod \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\" (UID: \"d37a3481-62b0-42fd-b6c9-198f0e5aac93\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.624430 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-catalog-content\") pod \"a3c13742-8972-4320-8b08-ced2c55156d3\" (UID: \"a3c13742-8972-4320-8b08-ced2c55156d3\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.624690 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-728rt\" (UniqueName: \"kubernetes.io/projected/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-kube-api-access-728rt\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.624786 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.624831 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8e8230b2-fb50-43c7-8a69-af1d02cce895" (UID: "8e8230b2-fb50-43c7-8a69-af1d02cce895"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.626579 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-utilities" (OuterVolumeSpecName: "utilities") pod "d37a3481-62b0-42fd-b6c9-198f0e5aac93" (UID: "d37a3481-62b0-42fd-b6c9-198f0e5aac93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.628007 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8230b2-fb50-43c7-8a69-af1d02cce895-kube-api-access-6jctq" (OuterVolumeSpecName: "kube-api-access-6jctq") pod "8e8230b2-fb50-43c7-8a69-af1d02cce895" (UID: "8e8230b2-fb50-43c7-8a69-af1d02cce895"). InnerVolumeSpecName "kube-api-access-6jctq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.629173 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-utilities" (OuterVolumeSpecName: "utilities") pod "a3c13742-8972-4320-8b08-ced2c55156d3" (UID: "a3c13742-8972-4320-8b08-ced2c55156d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.631716 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37a3481-62b0-42fd-b6c9-198f0e5aac93-kube-api-access-nfn9x" (OuterVolumeSpecName: "kube-api-access-nfn9x") pod "d37a3481-62b0-42fd-b6c9-198f0e5aac93" (UID: "d37a3481-62b0-42fd-b6c9-198f0e5aac93"). InnerVolumeSpecName "kube-api-access-nfn9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.632343 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8e8230b2-fb50-43c7-8a69-af1d02cce895" (UID: "8e8230b2-fb50-43c7-8a69-af1d02cce895"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.633790 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c13742-8972-4320-8b08-ced2c55156d3-kube-api-access-blq7g" (OuterVolumeSpecName: "kube-api-access-blq7g") pod "a3c13742-8972-4320-8b08-ced2c55156d3" (UID: "a3c13742-8972-4320-8b08-ced2c55156d3"). InnerVolumeSpecName "kube-api-access-blq7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.652714 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef385405-9334-4ba2-a7ed-abd9d51cbd5d" (UID: "ef385405-9334-4ba2-a7ed-abd9d51cbd5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.695552 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d37a3481-62b0-42fd-b6c9-198f0e5aac93" (UID: "d37a3481-62b0-42fd-b6c9-198f0e5aac93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.725484 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-utilities\") pod \"0c729b20-2f40-43b6-8432-062f8a6cce37\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.725632 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-catalog-content\") pod \"0c729b20-2f40-43b6-8432-062f8a6cce37\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.725680 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frsxm\" (UniqueName: \"kubernetes.io/projected/0c729b20-2f40-43b6-8432-062f8a6cce37-kube-api-access-frsxm\") pod \"0c729b20-2f40-43b6-8432-062f8a6cce37\" (UID: \"0c729b20-2f40-43b6-8432-062f8a6cce37\") "
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.725987 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blq7g\" (UniqueName: \"kubernetes.io/projected/a3c13742-8972-4320-8b08-ced2c55156d3-kube-api-access-blq7g\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726004 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfn9x\" (UniqueName: \"kubernetes.io/projected/d37a3481-62b0-42fd-b6c9-198f0e5aac93-kube-api-access-nfn9x\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726013 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jctq\" (UniqueName: \"kubernetes.io/projected/8e8230b2-fb50-43c7-8a69-af1d02cce895-kube-api-access-6jctq\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726022 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef385405-9334-4ba2-a7ed-abd9d51cbd5d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726030 4711 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726054 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726065 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726074 4711 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e8230b2-fb50-43c7-8a69-af1d02cce895-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726082 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37a3481-62b0-42fd-b6c9-198f0e5aac93-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.726366 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-utilities" (OuterVolumeSpecName: "utilities") pod "0c729b20-2f40-43b6-8432-062f8a6cce37" (UID: "0c729b20-2f40-43b6-8432-062f8a6cce37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.728847 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c729b20-2f40-43b6-8432-062f8a6cce37-kube-api-access-frsxm" (OuterVolumeSpecName: "kube-api-access-frsxm") pod "0c729b20-2f40-43b6-8432-062f8a6cce37" (UID: "0c729b20-2f40-43b6-8432-062f8a6cce37"). InnerVolumeSpecName "kube-api-access-frsxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.744069 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c729b20-2f40-43b6-8432-062f8a6cce37" (UID: "0c729b20-2f40-43b6-8432-062f8a6cce37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.749548 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3c13742-8972-4320-8b08-ced2c55156d3" (UID: "a3c13742-8972-4320-8b08-ced2c55156d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.798821 4711 generic.go:334] "Generic (PLEG): container finished" podID="a3c13742-8972-4320-8b08-ced2c55156d3" containerID="d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f" exitCode=0
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.798866 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fgpt" event={"ID":"a3c13742-8972-4320-8b08-ced2c55156d3","Type":"ContainerDied","Data":"d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.798932 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fgpt" event={"ID":"a3c13742-8972-4320-8b08-ced2c55156d3","Type":"ContainerDied","Data":"fe60fe066a5e5af5481a75b2be40efd7986c1d926696fe6007c399599f9d1c63"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.798970 4711 scope.go:117] "RemoveContainer" containerID="d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.799070 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fgpt"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.801462 4711 generic.go:334] "Generic (PLEG): container finished" podID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerID="cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539" exitCode=0
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.801556 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb9pv" event={"ID":"ef385405-9334-4ba2-a7ed-abd9d51cbd5d","Type":"ContainerDied","Data":"cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.801606 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tb9pv"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.801614 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tb9pv" event={"ID":"ef385405-9334-4ba2-a7ed-abd9d51cbd5d","Type":"ContainerDied","Data":"67791e10e6ec82588058d250bd9ed0e25b8eb2bc7100bd270ddeafda46ab00dc"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.806432 4711 generic.go:334] "Generic (PLEG): container finished" podID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerID="d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58" exitCode=0
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.806520 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7l5th"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.806520 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7l5th" event={"ID":"d37a3481-62b0-42fd-b6c9-198f0e5aac93","Type":"ContainerDied","Data":"d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.806663 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7l5th" event={"ID":"d37a3481-62b0-42fd-b6c9-198f0e5aac93","Type":"ContainerDied","Data":"c3660ee566566ece5bb4d9d6e041ddc35020d0c2a636e9efc7e63caffcf16d40"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.810843 4711 generic.go:334] "Generic (PLEG): container finished" podID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerID="19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7" exitCode=0
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.810891 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grd8v" event={"ID":"0c729b20-2f40-43b6-8432-062f8a6cce37","Type":"ContainerDied","Data":"19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.810907 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grd8v" event={"ID":"0c729b20-2f40-43b6-8432-062f8a6cce37","Type":"ContainerDied","Data":"915ddd5e36cb156220a1d0b907e4fc71ae4f39cde81572096ee79bee0d922e9c"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.810983 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grd8v"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.815432 4711 generic.go:334] "Generic (PLEG): container finished" podID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerID="d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142" exitCode=0
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.815466 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" event={"ID":"8e8230b2-fb50-43c7-8a69-af1d02cce895","Type":"ContainerDied","Data":"d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.815488 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws" event={"ID":"8e8230b2-fb50-43c7-8a69-af1d02cce895","Type":"ContainerDied","Data":"3b56e7ae646cc9a25c9d018cfdd203cb17225d317d76da294070a62bab2b51fa"}
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.815533 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m5tws"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.816239 4711 scope.go:117] "RemoveContainer" containerID="703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.826688 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.826711 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frsxm\" (UniqueName: \"kubernetes.io/projected/0c729b20-2f40-43b6-8432-062f8a6cce37-kube-api-access-frsxm\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.826723 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c13742-8972-4320-8b08-ced2c55156d3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.826731 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c729b20-2f40-43b6-8432-062f8a6cce37-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.841169 4711 scope.go:117] "RemoveContainer" containerID="e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.859839 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fgpt"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.868261 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7fgpt"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.874745 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4fwd"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.882564 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7l5th"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.892411 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7l5th"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.899626 4711 scope.go:117] "RemoveContainer" containerID="d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f"
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.900640 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f\": container with ID starting with d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f not found: ID does not exist" containerID="d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.900850 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f"} err="failed to get container status \"d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f\": rpc error: code = NotFound desc = could not find container \"d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f\": container with ID starting with d950a6d9952100249adba2c0495354ae6eb645e605290d4134534de608f1b14f not found: ID does not exist"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.900991 4711 scope.go:117] "RemoveContainer" containerID="703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98"
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.901444 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98\": container with ID starting with 703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98 not found: ID does not exist" containerID="703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.901505 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98"} err="failed to get container status \"703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98\": rpc error: code = NotFound desc = could not find container \"703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98\": container with ID starting with 703786e75f03f384e169484bfed3ffb32fe1a1d8c600363db507688c75d90b98 not found: ID does not exist"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.901527 4711 scope.go:117] "RemoveContainer" containerID="e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07"
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.901780 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07\": container with ID starting with e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07 not found: ID does not exist" containerID="e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.901889 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07"} err="failed to get container status \"e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07\": rpc error: code = NotFound desc = could not find container \"e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07\": container with ID starting with e02acd59809fd9266f4df27e3908dfda9ef7f2eaf8fceb6b54e4f747c9026f07 not found: ID does not exist"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.901991 4711 scope.go:117] "RemoveContainer" containerID="cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.902003 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tb9pv"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.911551 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tb9pv"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.916787 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grd8v"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.919443 4711 scope.go:117] "RemoveContainer" containerID="6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.921182 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-grd8v"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.927518 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5tws"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.931291 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m5tws"]
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.941559 4711 scope.go:117] "RemoveContainer" containerID="397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.971369 4711 scope.go:117] "RemoveContainer" containerID="cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539"
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.971779 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539\": container with ID starting with cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539 not found: ID does not exist" containerID="cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.971839 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539"} err="failed to get container status \"cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539\": rpc error: code = NotFound desc = could not find container \"cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539\": container with ID starting with cc7378c970fe2016cd08730ad20ef7cc2c1574c3a0fd260a17e1d079a5577539 not found: ID does not exist"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.971861 4711 scope.go:117] "RemoveContainer" containerID="6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1"
Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.972186 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1\": container with ID starting with 6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1 not found: ID does not exist" containerID="6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1"
Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.972203 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1"} err="failed to get container status \"6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1\": rpc error: code = NotFound desc = could not find 
container \"6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1\": container with ID starting with 6b7ebab21c4afeccd07bf8f26c12e9b067bfdd7b240a81bd27a9ab0c035353d1 not found: ID does not exist" Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.972220 4711 scope.go:117] "RemoveContainer" containerID="397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8" Dec 02 10:19:08 crc kubenswrapper[4711]: E1202 10:19:08.972511 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8\": container with ID starting with 397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8 not found: ID does not exist" containerID="397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8" Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.972540 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8"} err="failed to get container status \"397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8\": rpc error: code = NotFound desc = could not find container \"397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8\": container with ID starting with 397ba8b2604ea168b294cf201afcff7a6dce33b7c994b30c0cb7006b78c12fa8 not found: ID does not exist" Dec 02 10:19:08 crc kubenswrapper[4711]: I1202 10:19:08.972554 4711 scope.go:117] "RemoveContainer" containerID="d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.003095 4711 scope.go:117] "RemoveContainer" containerID="5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.020734 4711 scope.go:117] "RemoveContainer" containerID="662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff" Dec 02 10:19:09 
crc kubenswrapper[4711]: I1202 10:19:09.070561 4711 scope.go:117] "RemoveContainer" containerID="d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.071055 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58\": container with ID starting with d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58 not found: ID does not exist" containerID="d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.071111 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58"} err="failed to get container status \"d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58\": rpc error: code = NotFound desc = could not find container \"d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58\": container with ID starting with d44b5083f8033ca0c3252f4dd2e3b2c295d152b9019d862591a010b52a3d9d58 not found: ID does not exist" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.071147 4711 scope.go:117] "RemoveContainer" containerID="5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.071440 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca\": container with ID starting with 5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca not found: ID does not exist" containerID="5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.071472 4711 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca"} err="failed to get container status \"5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca\": rpc error: code = NotFound desc = could not find container \"5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca\": container with ID starting with 5b651b247fff1bae71930156835ee9077f0a88f19807b275ccffa3b0951ea3ca not found: ID does not exist" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.071495 4711 scope.go:117] "RemoveContainer" containerID="662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.071730 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff\": container with ID starting with 662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff not found: ID does not exist" containerID="662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.071763 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff"} err="failed to get container status \"662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff\": rpc error: code = NotFound desc = could not find container \"662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff\": container with ID starting with 662882d49165964bb639c286db0bf8d9037bf236830b1c15b54787c0a19e05ff not found: ID does not exist" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.071805 4711 scope.go:117] "RemoveContainer" containerID="19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.087661 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" path="/var/lib/kubelet/pods/0c729b20-2f40-43b6-8432-062f8a6cce37/volumes" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.088776 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" path="/var/lib/kubelet/pods/8e8230b2-fb50-43c7-8a69-af1d02cce895/volumes" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.089542 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" path="/var/lib/kubelet/pods/a3c13742-8972-4320-8b08-ced2c55156d3/volumes" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.090852 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" path="/var/lib/kubelet/pods/d37a3481-62b0-42fd-b6c9-198f0e5aac93/volumes" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.091845 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" path="/var/lib/kubelet/pods/ef385405-9334-4ba2-a7ed-abd9d51cbd5d/volumes" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.101700 4711 scope.go:117] "RemoveContainer" containerID="20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.119797 4711 scope.go:117] "RemoveContainer" containerID="59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.136017 4711 scope.go:117] "RemoveContainer" containerID="19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.136587 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7\": container with ID starting with 
19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7 not found: ID does not exist" containerID="19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.136829 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7"} err="failed to get container status \"19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7\": rpc error: code = NotFound desc = could not find container \"19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7\": container with ID starting with 19b7df33d7a10cb1d1a47b97fc8078f6210c8963e42cdafcfdbc41cc5f8dfff7 not found: ID does not exist" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.136936 4711 scope.go:117] "RemoveContainer" containerID="20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.137322 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc\": container with ID starting with 20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc not found: ID does not exist" containerID="20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.137344 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc"} err="failed to get container status \"20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc\": rpc error: code = NotFound desc = could not find container \"20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc\": container with ID starting with 20e00834afa981dc79ef10514624d4608ba8fefdd64b8d3b3309b6b4aab257bc not found: ID does not 
exist" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.137359 4711 scope.go:117] "RemoveContainer" containerID="59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.137673 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a\": container with ID starting with 59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a not found: ID does not exist" containerID="59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.137742 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a"} err="failed to get container status \"59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a\": rpc error: code = NotFound desc = could not find container \"59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a\": container with ID starting with 59cc8d68aac28b724bf00f489b3eb117f34096c8ec6dd8e9d531a2a1ca35eb4a not found: ID does not exist" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.137787 4711 scope.go:117] "RemoveContainer" containerID="d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.154843 4711 scope.go:117] "RemoveContainer" containerID="bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.179296 4711 scope.go:117] "RemoveContainer" containerID="d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.179997 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142\": container with ID starting with d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142 not found: ID does not exist" containerID="d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.180031 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142"} err="failed to get container status \"d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142\": rpc error: code = NotFound desc = could not find container \"d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142\": container with ID starting with d00ab8b603209a4053e2549e2be6ea3b74ec898842c6931e6587a06a258ec142 not found: ID does not exist" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.180078 4711 scope.go:117] "RemoveContainer" containerID="bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.180448 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768\": container with ID starting with bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768 not found: ID does not exist" containerID="bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.180527 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768"} err="failed to get container status \"bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768\": rpc error: code = NotFound desc = could not find container \"bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768\": container with ID 
starting with bcf989a558ac0ab0281d87b1f466d4cdcbb21fe5821060ffd4f6b19ea40f0768 not found: ID does not exist" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.458270 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7tn2s"] Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.458722 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.458809 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.458905 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerName="extract-utilities" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.459012 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerName="extract-utilities" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.459099 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.459170 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.459255 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="extract-content" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.459327 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="extract-content" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.459486 4711 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" containerName="extract-content" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.459566 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" containerName="extract-content" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.459645 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.459729 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.459803 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.459878 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.459992 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" containerName="extract-utilities" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.460070 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" containerName="extract-utilities" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.460153 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.460230 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.460314 4711 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="extract-utilities" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.460390 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="extract-utilities" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.460467 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="extract-content" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.460545 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="extract-content" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.460620 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.460702 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.460778 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="extract-utilities" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.460848 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="extract-utilities" Dec 02 10:19:09 crc kubenswrapper[4711]: E1202 10:19:09.460920 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerName="extract-content" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.461008 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerName="extract-content" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.461200 4711 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d37a3481-62b0-42fd-b6c9-198f0e5aac93" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.461291 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef385405-9334-4ba2-a7ed-abd9d51cbd5d" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.461372 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.461455 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c13742-8972-4320-8b08-ced2c55156d3" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.461529 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c729b20-2f40-43b6-8432-062f8a6cce37" containerName="registry-server" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.461608 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8230b2-fb50-43c7-8a69-af1d02cce895" containerName="marketplace-operator" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.462549 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.465346 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.471288 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7tn2s"] Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.536589 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48f9\" (UniqueName: \"kubernetes.io/projected/61cb2319-0773-4cab-9057-ea1631ad72b2-kube-api-access-w48f9\") pod \"certified-operators-7tn2s\" (UID: \"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.536896 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cb2319-0773-4cab-9057-ea1631ad72b2-utilities\") pod \"certified-operators-7tn2s\" (UID: \"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.537103 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cb2319-0773-4cab-9057-ea1631ad72b2-catalog-content\") pod \"certified-operators-7tn2s\" (UID: \"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.638633 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cb2319-0773-4cab-9057-ea1631ad72b2-utilities\") pod \"certified-operators-7tn2s\" (UID: 
\"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.638704 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cb2319-0773-4cab-9057-ea1631ad72b2-catalog-content\") pod \"certified-operators-7tn2s\" (UID: \"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.638787 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w48f9\" (UniqueName: \"kubernetes.io/projected/61cb2319-0773-4cab-9057-ea1631ad72b2-kube-api-access-w48f9\") pod \"certified-operators-7tn2s\" (UID: \"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.639398 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cb2319-0773-4cab-9057-ea1631ad72b2-utilities\") pod \"certified-operators-7tn2s\" (UID: \"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.639505 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cb2319-0773-4cab-9057-ea1631ad72b2-catalog-content\") pod \"certified-operators-7tn2s\" (UID: \"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.659982 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48f9\" (UniqueName: \"kubernetes.io/projected/61cb2319-0773-4cab-9057-ea1631ad72b2-kube-api-access-w48f9\") pod \"certified-operators-7tn2s\" (UID: 
\"61cb2319-0773-4cab-9057-ea1631ad72b2\") " pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.778604 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.823194 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd" event={"ID":"bc27e106-dc06-4326-9cc4-99ca9b5206bb","Type":"ContainerStarted","Data":"1a2adfb55cc64813c83b5856ae69be1a44cfb40ba4a8902070d82cb0bfb3bb69"} Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.823265 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd" event={"ID":"bc27e106-dc06-4326-9cc4-99ca9b5206bb","Type":"ContainerStarted","Data":"47daa4d5d6aaef16c39012113edff810dfa0db556189724216b261483ec33cb5"} Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.824074 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.828290 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd" Dec 02 10:19:09 crc kubenswrapper[4711]: I1202 10:19:09.843589 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-z4fwd" podStartSLOduration=2.843566277 podStartE2EDuration="2.843566277s" podCreationTimestamp="2025-12-02 10:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:19:09.841716057 +0000 UTC m=+339.551082514" watchObservedRunningTime="2025-12-02 10:19:09.843566277 +0000 UTC m=+339.552932744" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 
10:19:10.258976 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7tn2s"] Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.453635 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g8w7v"] Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.457085 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.459588 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.467076 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8w7v"] Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.559394 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdws9\" (UniqueName: \"kubernetes.io/projected/e7d69893-d9ab-42f7-a505-472548cbe19d-kube-api-access-vdws9\") pod \"redhat-marketplace-g8w7v\" (UID: \"e7d69893-d9ab-42f7-a505-472548cbe19d\") " pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.559492 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d69893-d9ab-42f7-a505-472548cbe19d-utilities\") pod \"redhat-marketplace-g8w7v\" (UID: \"e7d69893-d9ab-42f7-a505-472548cbe19d\") " pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.559521 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d69893-d9ab-42f7-a505-472548cbe19d-catalog-content\") pod \"redhat-marketplace-g8w7v\" (UID: 
\"e7d69893-d9ab-42f7-a505-472548cbe19d\") " pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.660609 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdws9\" (UniqueName: \"kubernetes.io/projected/e7d69893-d9ab-42f7-a505-472548cbe19d-kube-api-access-vdws9\") pod \"redhat-marketplace-g8w7v\" (UID: \"e7d69893-d9ab-42f7-a505-472548cbe19d\") " pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.660684 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d69893-d9ab-42f7-a505-472548cbe19d-utilities\") pod \"redhat-marketplace-g8w7v\" (UID: \"e7d69893-d9ab-42f7-a505-472548cbe19d\") " pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.660708 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d69893-d9ab-42f7-a505-472548cbe19d-catalog-content\") pod \"redhat-marketplace-g8w7v\" (UID: \"e7d69893-d9ab-42f7-a505-472548cbe19d\") " pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.661202 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d69893-d9ab-42f7-a505-472548cbe19d-catalog-content\") pod \"redhat-marketplace-g8w7v\" (UID: \"e7d69893-d9ab-42f7-a505-472548cbe19d\") " pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.661239 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d69893-d9ab-42f7-a505-472548cbe19d-utilities\") pod \"redhat-marketplace-g8w7v\" (UID: \"e7d69893-d9ab-42f7-a505-472548cbe19d\") " 
pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.680616 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdws9\" (UniqueName: \"kubernetes.io/projected/e7d69893-d9ab-42f7-a505-472548cbe19d-kube-api-access-vdws9\") pod \"redhat-marketplace-g8w7v\" (UID: \"e7d69893-d9ab-42f7-a505-472548cbe19d\") " pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.838038 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.847486 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tn2s" event={"ID":"61cb2319-0773-4cab-9057-ea1631ad72b2","Type":"ContainerDied","Data":"ae8a8ee3b094d976d0e70a3524658f2bcc473d260dfd2cfbea466610b3b47a76"} Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.847368 4711 generic.go:334] "Generic (PLEG): container finished" podID="61cb2319-0773-4cab-9057-ea1631ad72b2" containerID="ae8a8ee3b094d976d0e70a3524658f2bcc473d260dfd2cfbea466610b3b47a76" exitCode=0 Dec 02 10:19:10 crc kubenswrapper[4711]: I1202 10:19:10.847649 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tn2s" event={"ID":"61cb2319-0773-4cab-9057-ea1631ad72b2","Type":"ContainerStarted","Data":"cb92e1cd66a0881cf219a12046d6aeb8e2f1b1ab6fe22a7a2b6ed639e0e9cdb1"} Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.256940 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8w7v"] Dec 02 10:19:11 crc kubenswrapper[4711]: W1202 10:19:11.265648 4711 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7d69893_d9ab_42f7_a505_472548cbe19d.slice/crio-b1fdd0ea4a05884438262db0cbe9c91aca6eca601bab3c7d140ae48461d8fb29 WatchSource:0}: Error finding container b1fdd0ea4a05884438262db0cbe9c91aca6eca601bab3c7d140ae48461d8fb29: Status 404 returned error can't find the container with id b1fdd0ea4a05884438262db0cbe9c91aca6eca601bab3c7d140ae48461d8fb29 Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.855396 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tln95"] Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.857305 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.859469 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.859831 4711 generic.go:334] "Generic (PLEG): container finished" podID="e7d69893-d9ab-42f7-a505-472548cbe19d" containerID="509946efa21f900f8216af1604bf0fcf0d77c4b2a6ba0b4e96f0672d317f322b" exitCode=0 Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.860113 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8w7v" event={"ID":"e7d69893-d9ab-42f7-a505-472548cbe19d","Type":"ContainerDied","Data":"509946efa21f900f8216af1604bf0fcf0d77c4b2a6ba0b4e96f0672d317f322b"} Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.860202 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8w7v" event={"ID":"e7d69893-d9ab-42f7-a505-472548cbe19d","Type":"ContainerStarted","Data":"b1fdd0ea4a05884438262db0cbe9c91aca6eca601bab3c7d140ae48461d8fb29"} Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.863442 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-tln95"] Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.976466 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-catalog-content\") pod \"redhat-operators-tln95\" (UID: \"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.976557 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-utilities\") pod \"redhat-operators-tln95\" (UID: \"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:11 crc kubenswrapper[4711]: I1202 10:19:11.976608 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2797x\" (UniqueName: \"kubernetes.io/projected/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-kube-api-access-2797x\") pod \"redhat-operators-tln95\" (UID: \"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.077766 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-catalog-content\") pod \"redhat-operators-tln95\" (UID: \"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.078373 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-utilities\") pod \"redhat-operators-tln95\" (UID: 
\"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.078426 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-catalog-content\") pod \"redhat-operators-tln95\" (UID: \"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.078461 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2797x\" (UniqueName: \"kubernetes.io/projected/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-kube-api-access-2797x\") pod \"redhat-operators-tln95\" (UID: \"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.078677 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-utilities\") pod \"redhat-operators-tln95\" (UID: \"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.102138 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2797x\" (UniqueName: \"kubernetes.io/projected/48cb272a-41a6-4371-b3c8-fe7d6e661ba2-kube-api-access-2797x\") pod \"redhat-operators-tln95\" (UID: \"48cb272a-41a6-4371-b3c8-fe7d6e661ba2\") " pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.178069 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.628516 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tln95"] Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.852491 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bl9gw"] Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.853836 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.855553 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.862865 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bl9gw"] Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.887272 4711 generic.go:334] "Generic (PLEG): container finished" podID="61cb2319-0773-4cab-9057-ea1631ad72b2" containerID="09e5d059efb69efe315a0657f0bf7d9d3d31f92de9bd9b73d520a6d2f08d057a" exitCode=0 Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.887460 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tn2s" event={"ID":"61cb2319-0773-4cab-9057-ea1631ad72b2","Type":"ContainerDied","Data":"09e5d059efb69efe315a0657f0bf7d9d3d31f92de9bd9b73d520a6d2f08d057a"} Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.889291 4711 generic.go:334] "Generic (PLEG): container finished" podID="48cb272a-41a6-4371-b3c8-fe7d6e661ba2" containerID="f8e75f1c9845d5d1463c3501b25dbccbb04d30ab7cabdce167609480e68a6f48" exitCode=0 Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.889386 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tln95" 
event={"ID":"48cb272a-41a6-4371-b3c8-fe7d6e661ba2","Type":"ContainerDied","Data":"f8e75f1c9845d5d1463c3501b25dbccbb04d30ab7cabdce167609480e68a6f48"} Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.889408 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tln95" event={"ID":"48cb272a-41a6-4371-b3c8-fe7d6e661ba2","Type":"ContainerStarted","Data":"b39b2923998f1270fefe4149716f9c3dc96e73894eb80e66be184e3481104714"} Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.988643 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-catalog-content\") pod \"community-operators-bl9gw\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.988753 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgshq\" (UniqueName: \"kubernetes.io/projected/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-kube-api-access-hgshq\") pod \"community-operators-bl9gw\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:12 crc kubenswrapper[4711]: I1202 10:19:12.988819 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-utilities\") pod \"community-operators-bl9gw\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.090201 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgshq\" (UniqueName: \"kubernetes.io/projected/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-kube-api-access-hgshq\") pod 
\"community-operators-bl9gw\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.090277 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-utilities\") pod \"community-operators-bl9gw\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.090390 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-catalog-content\") pod \"community-operators-bl9gw\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.090917 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-catalog-content\") pod \"community-operators-bl9gw\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.091292 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-utilities\") pod \"community-operators-bl9gw\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.112848 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgshq\" (UniqueName: \"kubernetes.io/projected/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-kube-api-access-hgshq\") pod \"community-operators-bl9gw\" (UID: 
\"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.191580 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.628645 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bl9gw"] Dec 02 10:19:13 crc kubenswrapper[4711]: W1202 10:19:13.635806 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c2e1ec8_8c64_4bf3_a577_0db5a91328de.slice/crio-588a08df06f4c02ba35d262994cab287e0975522e1539427dae2a94b99e949d5 WatchSource:0}: Error finding container 588a08df06f4c02ba35d262994cab287e0975522e1539427dae2a94b99e949d5: Status 404 returned error can't find the container with id 588a08df06f4c02ba35d262994cab287e0975522e1539427dae2a94b99e949d5 Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.897670 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tn2s" event={"ID":"61cb2319-0773-4cab-9057-ea1631ad72b2","Type":"ContainerStarted","Data":"60690e095e5d6d56280168ac5d2cd6c19f95384c6cc253a84a761776c9e64403"} Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.899315 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tln95" event={"ID":"48cb272a-41a6-4371-b3c8-fe7d6e661ba2","Type":"ContainerStarted","Data":"eb4c8c883bd95b59478745c1288cce9798eb195376d36967dc9ee81e82b55368"} Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.902736 4711 generic.go:334] "Generic (PLEG): container finished" podID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerID="2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab" exitCode=0 Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.902844 4711 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-bl9gw" event={"ID":"9c2e1ec8-8c64-4bf3-a577-0db5a91328de","Type":"ContainerDied","Data":"2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab"} Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.902873 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bl9gw" event={"ID":"9c2e1ec8-8c64-4bf3-a577-0db5a91328de","Type":"ContainerStarted","Data":"588a08df06f4c02ba35d262994cab287e0975522e1539427dae2a94b99e949d5"} Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.905364 4711 generic.go:334] "Generic (PLEG): container finished" podID="e7d69893-d9ab-42f7-a505-472548cbe19d" containerID="23a641e439c2370df862284db9673dc2bacc5908acf9f59bf270da5f5474c421" exitCode=0 Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.905399 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8w7v" event={"ID":"e7d69893-d9ab-42f7-a505-472548cbe19d","Type":"ContainerDied","Data":"23a641e439c2370df862284db9673dc2bacc5908acf9f59bf270da5f5474c421"} Dec 02 10:19:13 crc kubenswrapper[4711]: I1202 10:19:13.917879 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7tn2s" podStartSLOduration=2.397868121 podStartE2EDuration="4.917864058s" podCreationTimestamp="2025-12-02 10:19:09 +0000 UTC" firstStartedPulling="2025-12-02 10:19:10.850996218 +0000 UTC m=+340.560362675" lastFinishedPulling="2025-12-02 10:19:13.370992165 +0000 UTC m=+343.080358612" observedRunningTime="2025-12-02 10:19:13.916409909 +0000 UTC m=+343.625776356" watchObservedRunningTime="2025-12-02 10:19:13.917864058 +0000 UTC m=+343.627230505" Dec 02 10:19:14 crc kubenswrapper[4711]: I1202 10:19:14.916719 4711 generic.go:334] "Generic (PLEG): container finished" podID="48cb272a-41a6-4371-b3c8-fe7d6e661ba2" 
containerID="eb4c8c883bd95b59478745c1288cce9798eb195376d36967dc9ee81e82b55368" exitCode=0 Dec 02 10:19:14 crc kubenswrapper[4711]: I1202 10:19:14.917009 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tln95" event={"ID":"48cb272a-41a6-4371-b3c8-fe7d6e661ba2","Type":"ContainerDied","Data":"eb4c8c883bd95b59478745c1288cce9798eb195376d36967dc9ee81e82b55368"} Dec 02 10:19:14 crc kubenswrapper[4711]: I1202 10:19:14.926391 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8w7v" event={"ID":"e7d69893-d9ab-42f7-a505-472548cbe19d","Type":"ContainerStarted","Data":"0c2d1c1b4ca5134404829a8d4117f735999211bc1687a3030b4880a674a5dc33"} Dec 02 10:19:14 crc kubenswrapper[4711]: I1202 10:19:14.957834 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g8w7v" podStartSLOduration=2.493979416 podStartE2EDuration="4.957814923s" podCreationTimestamp="2025-12-02 10:19:10 +0000 UTC" firstStartedPulling="2025-12-02 10:19:11.861670617 +0000 UTC m=+341.571037074" lastFinishedPulling="2025-12-02 10:19:14.325506134 +0000 UTC m=+344.034872581" observedRunningTime="2025-12-02 10:19:14.95582676 +0000 UTC m=+344.665193227" watchObservedRunningTime="2025-12-02 10:19:14.957814923 +0000 UTC m=+344.667181370" Dec 02 10:19:15 crc kubenswrapper[4711]: I1202 10:19:15.934043 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tln95" event={"ID":"48cb272a-41a6-4371-b3c8-fe7d6e661ba2","Type":"ContainerStarted","Data":"043e8bc4c1c9acde95f158cdb31a820e4eb78e8975152015c4fe0d50d42613e6"} Dec 02 10:19:15 crc kubenswrapper[4711]: I1202 10:19:15.935873 4711 generic.go:334] "Generic (PLEG): container finished" podID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerID="c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9" exitCode=0 Dec 02 10:19:15 crc kubenswrapper[4711]: I1202 10:19:15.937299 
4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bl9gw" event={"ID":"9c2e1ec8-8c64-4bf3-a577-0db5a91328de","Type":"ContainerDied","Data":"c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9"} Dec 02 10:19:15 crc kubenswrapper[4711]: I1202 10:19:15.965192 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tln95" podStartSLOduration=2.4368255420000002 podStartE2EDuration="4.965169811s" podCreationTimestamp="2025-12-02 10:19:11 +0000 UTC" firstStartedPulling="2025-12-02 10:19:12.892160957 +0000 UTC m=+342.601527404" lastFinishedPulling="2025-12-02 10:19:15.420505226 +0000 UTC m=+345.129871673" observedRunningTime="2025-12-02 10:19:15.958175314 +0000 UTC m=+345.667541761" watchObservedRunningTime="2025-12-02 10:19:15.965169811 +0000 UTC m=+345.674536248" Dec 02 10:19:17 crc kubenswrapper[4711]: I1202 10:19:17.952708 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bl9gw" event={"ID":"9c2e1ec8-8c64-4bf3-a577-0db5a91328de","Type":"ContainerStarted","Data":"b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb"} Dec 02 10:19:17 crc kubenswrapper[4711]: I1202 10:19:17.974120 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bl9gw" podStartSLOduration=3.183112873 podStartE2EDuration="5.974066734s" podCreationTimestamp="2025-12-02 10:19:12 +0000 UTC" firstStartedPulling="2025-12-02 10:19:13.905915578 +0000 UTC m=+343.615282025" lastFinishedPulling="2025-12-02 10:19:16.696869429 +0000 UTC m=+346.406235886" observedRunningTime="2025-12-02 10:19:17.97172888 +0000 UTC m=+347.681095337" watchObservedRunningTime="2025-12-02 10:19:17.974066734 +0000 UTC m=+347.683433181" Dec 02 10:19:19 crc kubenswrapper[4711]: I1202 10:19:19.780129 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:19 crc kubenswrapper[4711]: I1202 10:19:19.780183 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:19 crc kubenswrapper[4711]: I1202 10:19:19.823932 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:20 crc kubenswrapper[4711]: I1202 10:19:20.001425 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7tn2s" Dec 02 10:19:20 crc kubenswrapper[4711]: I1202 10:19:20.838601 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:20 crc kubenswrapper[4711]: I1202 10:19:20.840261 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:20 crc kubenswrapper[4711]: I1202 10:19:20.884506 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:21 crc kubenswrapper[4711]: I1202 10:19:21.002712 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g8w7v" Dec 02 10:19:22 crc kubenswrapper[4711]: I1202 10:19:22.179163 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:22 crc kubenswrapper[4711]: I1202 10:19:22.179251 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:22 crc kubenswrapper[4711]: I1202 10:19:22.222328 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:22 crc kubenswrapper[4711]: I1202 
10:19:22.586149 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:19:22 crc kubenswrapper[4711]: I1202 10:19:22.586322 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:19:23 crc kubenswrapper[4711]: I1202 10:19:23.023825 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tln95" Dec 02 10:19:23 crc kubenswrapper[4711]: I1202 10:19:23.192360 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:23 crc kubenswrapper[4711]: I1202 10:19:23.192675 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:23 crc kubenswrapper[4711]: I1202 10:19:23.229590 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:24 crc kubenswrapper[4711]: I1202 10:19:24.035303 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.485911 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78855b4d5f-4dtsh"] Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.487807 4711 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" podUID="b2bd22d4-d56f-442a-96ce-6f3246aed105" containerName="controller-manager" containerID="cri-o://b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0" gracePeriod=30 Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.893532 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.973413 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bd22d4-d56f-442a-96ce-6f3246aed105-serving-cert\") pod \"b2bd22d4-d56f-442a-96ce-6f3246aed105\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.973470 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-proxy-ca-bundles\") pod \"b2bd22d4-d56f-442a-96ce-6f3246aed105\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.973540 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtwrx\" (UniqueName: \"kubernetes.io/projected/b2bd22d4-d56f-442a-96ce-6f3246aed105-kube-api-access-jtwrx\") pod \"b2bd22d4-d56f-442a-96ce-6f3246aed105\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.973590 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-config\") pod \"b2bd22d4-d56f-442a-96ce-6f3246aed105\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") " Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.973612 4711 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-client-ca\") pod \"b2bd22d4-d56f-442a-96ce-6f3246aed105\" (UID: \"b2bd22d4-d56f-442a-96ce-6f3246aed105\") "
Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.974310 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2bd22d4-d56f-442a-96ce-6f3246aed105" (UID: "b2bd22d4-d56f-442a-96ce-6f3246aed105"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.974476 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-config" (OuterVolumeSpecName: "config") pod "b2bd22d4-d56f-442a-96ce-6f3246aed105" (UID: "b2bd22d4-d56f-442a-96ce-6f3246aed105"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.974586 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b2bd22d4-d56f-442a-96ce-6f3246aed105" (UID: "b2bd22d4-d56f-442a-96ce-6f3246aed105"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.979662 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bd22d4-d56f-442a-96ce-6f3246aed105-kube-api-access-jtwrx" (OuterVolumeSpecName: "kube-api-access-jtwrx") pod "b2bd22d4-d56f-442a-96ce-6f3246aed105" (UID: "b2bd22d4-d56f-442a-96ce-6f3246aed105"). InnerVolumeSpecName "kube-api-access-jtwrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:19:40 crc kubenswrapper[4711]: I1202 10:19:40.982087 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bd22d4-d56f-442a-96ce-6f3246aed105-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2bd22d4-d56f-442a-96ce-6f3246aed105" (UID: "b2bd22d4-d56f-442a-96ce-6f3246aed105"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.075193 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.075234 4711 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-client-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.075244 4711 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bd22d4-d56f-442a-96ce-6f3246aed105-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.075252 4711 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2bd22d4-d56f-442a-96ce-6f3246aed105-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.075265 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtwrx\" (UniqueName: \"kubernetes.io/projected/b2bd22d4-d56f-442a-96ce-6f3246aed105-kube-api-access-jtwrx\") on node \"crc\" DevicePath \"\""
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.086513 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.086536 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" event={"ID":"b2bd22d4-d56f-442a-96ce-6f3246aed105","Type":"ContainerDied","Data":"b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0"}
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.086595 4711 scope.go:117] "RemoveContainer" containerID="b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.086501 4711 generic.go:334] "Generic (PLEG): container finished" podID="b2bd22d4-d56f-442a-96ce-6f3246aed105" containerID="b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0" exitCode=0
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.086669 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78855b4d5f-4dtsh" event={"ID":"b2bd22d4-d56f-442a-96ce-6f3246aed105","Type":"ContainerDied","Data":"e3e9578dbe260d5ae3bacc789fc4ec0fdd438044356fc5d5d07aba2cb7933c96"}
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.108680 4711 scope.go:117] "RemoveContainer" containerID="b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0"
Dec 02 10:19:41 crc kubenswrapper[4711]: E1202 10:19:41.109201 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0\": container with ID starting with b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0 not found: ID does not exist" containerID="b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.109269 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0"} err="failed to get container status \"b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0\": rpc error: code = NotFound desc = could not find container \"b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0\": container with ID starting with b8ffb1314201269ade6e7de7acad16d33f9f9b92e89198a73da19163eb05b6e0 not found: ID does not exist"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.114983 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78855b4d5f-4dtsh"]
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.118609 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78855b4d5f-4dtsh"]
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.603298 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6988db4df6-z6wtz"]
Dec 02 10:19:41 crc kubenswrapper[4711]: E1202 10:19:41.603629 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bd22d4-d56f-442a-96ce-6f3246aed105" containerName="controller-manager"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.603673 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bd22d4-d56f-442a-96ce-6f3246aed105" containerName="controller-manager"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.603784 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bd22d4-d56f-442a-96ce-6f3246aed105" containerName="controller-manager"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.604244 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.612153 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.613237 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.613713 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.613831 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.614033 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.615207 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.622857 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.625863 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6988db4df6-z6wtz"]
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.683339 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-config\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.683430 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kqf\" (UniqueName: \"kubernetes.io/projected/5270d25c-6893-4a0d-94eb-03641c75f117-kube-api-access-q7kqf\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.683456 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-client-ca\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.683668 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5270d25c-6893-4a0d-94eb-03641c75f117-serving-cert\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.683721 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-proxy-ca-bundles\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.785189 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kqf\" (UniqueName: \"kubernetes.io/projected/5270d25c-6893-4a0d-94eb-03641c75f117-kube-api-access-q7kqf\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.785250 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-client-ca\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.785306 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5270d25c-6893-4a0d-94eb-03641c75f117-serving-cert\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.785326 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-proxy-ca-bundles\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.785350 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-config\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.786620 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-client-ca\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.786941 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-config\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.787429 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5270d25c-6893-4a0d-94eb-03641c75f117-proxy-ca-bundles\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.791268 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5270d25c-6893-4a0d-94eb-03641c75f117-serving-cert\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.804081 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kqf\" (UniqueName: \"kubernetes.io/projected/5270d25c-6893-4a0d-94eb-03641c75f117-kube-api-access-q7kqf\") pod \"controller-manager-6988db4df6-z6wtz\" (UID: \"5270d25c-6893-4a0d-94eb-03641c75f117\") " pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:41 crc kubenswrapper[4711]: I1202 10:19:41.937759 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:42 crc kubenswrapper[4711]: I1202 10:19:42.413561 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6988db4df6-z6wtz"]
Dec 02 10:19:43 crc kubenswrapper[4711]: I1202 10:19:43.086564 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bd22d4-d56f-442a-96ce-6f3246aed105" path="/var/lib/kubelet/pods/b2bd22d4-d56f-442a-96ce-6f3246aed105/volumes"
Dec 02 10:19:43 crc kubenswrapper[4711]: I1202 10:19:43.111986 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz" event={"ID":"5270d25c-6893-4a0d-94eb-03641c75f117","Type":"ContainerStarted","Data":"0b9cbb9818c9f4010922039031d38113c0ed5c0ff83f2d370fc57a4c0c243f7d"}
Dec 02 10:19:43 crc kubenswrapper[4711]: I1202 10:19:43.112042 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz" event={"ID":"5270d25c-6893-4a0d-94eb-03641c75f117","Type":"ContainerStarted","Data":"c6ae55cc2c003206bb91a0d50779d3601988062c40b1dad4498ca24e1ce68529"}
Dec 02 10:19:43 crc kubenswrapper[4711]: I1202 10:19:43.112337 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:43 crc kubenswrapper[4711]: I1202 10:19:43.116825 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz"
Dec 02 10:19:43 crc kubenswrapper[4711]: I1202 10:19:43.139312 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6988db4df6-z6wtz" podStartSLOduration=3.13929096 podStartE2EDuration="3.13929096s" podCreationTimestamp="2025-12-02 10:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:19:43.134743678 +0000 UTC m=+372.844110115" watchObservedRunningTime="2025-12-02 10:19:43.13929096 +0000 UTC m=+372.848657407"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.768856 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kxz5z"]
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.769977 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.782414 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kxz5z"]
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.861512 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22037a7c-cd0c-494e-963f-7e84e9db4dcd-registry-certificates\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.861596 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-registry-tls\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.861629 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22037a7c-cd0c-494e-963f-7e84e9db4dcd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.861683 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg7pz\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-kube-api-access-cg7pz\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.861730 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.861762 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22037a7c-cd0c-494e-963f-7e84e9db4dcd-trusted-ca\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.861790 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22037a7c-cd0c-494e-963f-7e84e9db4dcd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.861808 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-bound-sa-token\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.881501 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.963180 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22037a7c-cd0c-494e-963f-7e84e9db4dcd-registry-certificates\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.963343 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-registry-tls\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.963403 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22037a7c-cd0c-494e-963f-7e84e9db4dcd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.963441 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg7pz\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-kube-api-access-cg7pz\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.963503 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22037a7c-cd0c-494e-963f-7e84e9db4dcd-trusted-ca\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.963552 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22037a7c-cd0c-494e-963f-7e84e9db4dcd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.963587 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-bound-sa-token\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.965076 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22037a7c-cd0c-494e-963f-7e84e9db4dcd-registry-certificates\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.965452 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22037a7c-cd0c-494e-963f-7e84e9db4dcd-trusted-ca\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.965697 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22037a7c-cd0c-494e-963f-7e84e9db4dcd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.971997 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22037a7c-cd0c-494e-963f-7e84e9db4dcd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.972196 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-registry-tls\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.984005 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-bound-sa-token\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:45 crc kubenswrapper[4711]: I1202 10:19:45.986779 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg7pz\" (UniqueName: \"kubernetes.io/projected/22037a7c-cd0c-494e-963f-7e84e9db4dcd-kube-api-access-cg7pz\") pod \"image-registry-66df7c8f76-kxz5z\" (UID: \"22037a7c-cd0c-494e-963f-7e84e9db4dcd\") " pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:46 crc kubenswrapper[4711]: I1202 10:19:46.126625 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:46 crc kubenswrapper[4711]: I1202 10:19:46.544481 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kxz5z"]
Dec 02 10:19:46 crc kubenswrapper[4711]: W1202 10:19:46.554293 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22037a7c_cd0c_494e_963f_7e84e9db4dcd.slice/crio-b0ee08a4177d3d51a59173c12a2d07197a72f50ba99204678d14ff4e96dd49f3 WatchSource:0}: Error finding container b0ee08a4177d3d51a59173c12a2d07197a72f50ba99204678d14ff4e96dd49f3: Status 404 returned error can't find the container with id b0ee08a4177d3d51a59173c12a2d07197a72f50ba99204678d14ff4e96dd49f3
Dec 02 10:19:47 crc kubenswrapper[4711]: I1202 10:19:47.136192 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z" event={"ID":"22037a7c-cd0c-494e-963f-7e84e9db4dcd","Type":"ContainerStarted","Data":"5b2bb10106dce9d30961342bc83f8867ac2883837b739543b142988b30d9e527"}
Dec 02 10:19:47 crc kubenswrapper[4711]: I1202 10:19:47.136243 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z" event={"ID":"22037a7c-cd0c-494e-963f-7e84e9db4dcd","Type":"ContainerStarted","Data":"b0ee08a4177d3d51a59173c12a2d07197a72f50ba99204678d14ff4e96dd49f3"}
Dec 02 10:19:47 crc kubenswrapper[4711]: I1202 10:19:47.137216 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:19:47 crc kubenswrapper[4711]: I1202 10:19:47.164722 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z" podStartSLOduration=2.164693557 podStartE2EDuration="2.164693557s" podCreationTimestamp="2025-12-02 10:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:19:47.161974045 +0000 UTC m=+376.871340502" watchObservedRunningTime="2025-12-02 10:19:47.164693557 +0000 UTC m=+376.874060034"
Dec 02 10:19:52 crc kubenswrapper[4711]: I1202 10:19:52.586796 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:19:52 crc kubenswrapper[4711]: I1202 10:19:52.588740 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:20:06 crc kubenswrapper[4711]: I1202 10:20:06.133077 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-kxz5z"
Dec 02 10:20:06 crc kubenswrapper[4711]: I1202 10:20:06.203251 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jc7xv"]
Dec 02 10:20:22 crc kubenswrapper[4711]: I1202 10:20:22.585809 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:20:22 crc kubenswrapper[4711]: I1202 10:20:22.586204 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:20:22 crc kubenswrapper[4711]: I1202 10:20:22.586256 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn"
Dec 02 10:20:22 crc kubenswrapper[4711]: I1202 10:20:22.586985 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e120b0a0cbb6d1edafb4930cf20f647e52a7929050bec77e5ac0b462823f904"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 10:20:22 crc kubenswrapper[4711]: I1202 10:20:22.587079 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://1e120b0a0cbb6d1edafb4930cf20f647e52a7929050bec77e5ac0b462823f904" gracePeriod=600
Dec 02 10:20:23 crc kubenswrapper[4711]: I1202 10:20:23.363209 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="1e120b0a0cbb6d1edafb4930cf20f647e52a7929050bec77e5ac0b462823f904" exitCode=0
Dec 02 10:20:23 crc kubenswrapper[4711]: I1202 10:20:23.363259 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"1e120b0a0cbb6d1edafb4930cf20f647e52a7929050bec77e5ac0b462823f904"}
Dec 02 10:20:23 crc kubenswrapper[4711]: I1202 10:20:23.364102 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"a8486a79de57fb21bfeb54f206501206d2dc4aa2ed4f085965d4cb08f9a3874d"}
Dec 02 10:20:23 crc kubenswrapper[4711]: I1202 10:20:23.364188 4711 scope.go:117] "RemoveContainer" containerID="64d0eff1b4c4c00c9664fa8cdedd2db08b76af0e7ac7a8b69b5ed9cc8def771f"
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.259686 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" podUID="d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" containerName="registry" containerID="cri-o://60ba3279eb3a9ca09f7c929f727814921a6570dbe53e2ff6c2621c78bf414567" gracePeriod=30
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.425543 4711 generic.go:334] "Generic (PLEG): container finished" podID="d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" containerID="60ba3279eb3a9ca09f7c929f727814921a6570dbe53e2ff6c2621c78bf414567" exitCode=0
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.425649 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" event={"ID":"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0","Type":"ContainerDied","Data":"60ba3279eb3a9ca09f7c929f727814921a6570dbe53e2ff6c2621c78bf414567"}
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.680808 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv"
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.843785 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2lkk\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-kube-api-access-w2lkk\") pod \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") "
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.844377 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-trusted-ca\") pod \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") "
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.844622 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") "
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.844673 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-tls\") pod \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") "
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.844722 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-installation-pull-secrets\") pod \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") "
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.844776 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-bound-sa-token\") pod \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") "
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.844818 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-certificates\") pod \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") "
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.844901 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-ca-trust-extracted\") pod \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\" (UID: \"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0\") "
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.845350 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.847396 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.851721 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.852531 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.852663 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0"). InnerVolumeSpecName "installation-pull-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.854181 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-kube-api-access-w2lkk" (OuterVolumeSpecName: "kube-api-access-w2lkk") pod "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0"). InnerVolumeSpecName "kube-api-access-w2lkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.870441 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.891128 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" (UID: "d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.946292 4711 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.946341 4711 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.946361 4711 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.946378 4711 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.946396 4711 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.946413 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2lkk\" (UniqueName: \"kubernetes.io/projected/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-kube-api-access-w2lkk\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:31 crc kubenswrapper[4711]: I1202 10:20:31.946430 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:32 crc 
kubenswrapper[4711]: I1202 10:20:32.438223 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" event={"ID":"d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0","Type":"ContainerDied","Data":"85ac36a1471a71c665f7dc8c1139c07a9fbca9df659ea7249168264eae3aba50"} Dec 02 10:20:32 crc kubenswrapper[4711]: I1202 10:20:32.438301 4711 scope.go:117] "RemoveContainer" containerID="60ba3279eb3a9ca09f7c929f727814921a6570dbe53e2ff6c2621c78bf414567" Dec 02 10:20:32 crc kubenswrapper[4711]: I1202 10:20:32.438540 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jc7xv" Dec 02 10:20:32 crc kubenswrapper[4711]: I1202 10:20:32.490161 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jc7xv"] Dec 02 10:20:32 crc kubenswrapper[4711]: I1202 10:20:32.495642 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jc7xv"] Dec 02 10:20:33 crc kubenswrapper[4711]: I1202 10:20:33.085217 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" path="/var/lib/kubelet/pods/d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0/volumes" Dec 02 10:22:22 crc kubenswrapper[4711]: I1202 10:22:22.586638 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:22:22 crc kubenswrapper[4711]: I1202 10:22:22.587584 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:22:31 crc kubenswrapper[4711]: I1202 10:22:31.270280 4711 scope.go:117] "RemoveContainer" containerID="1e48c1a8f5447e662df44305da2641363e22ae249841622e164f2a72b15586a5" Dec 02 10:22:52 crc kubenswrapper[4711]: I1202 10:22:52.585676 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:22:52 crc kubenswrapper[4711]: I1202 10:22:52.586614 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:23:22 crc kubenswrapper[4711]: I1202 10:23:22.586242 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:23:22 crc kubenswrapper[4711]: I1202 10:23:22.587024 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:23:22 crc kubenswrapper[4711]: I1202 10:23:22.587197 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:23:22 crc 
kubenswrapper[4711]: I1202 10:23:22.588400 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8486a79de57fb21bfeb54f206501206d2dc4aa2ed4f085965d4cb08f9a3874d"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:23:22 crc kubenswrapper[4711]: I1202 10:23:22.588529 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://a8486a79de57fb21bfeb54f206501206d2dc4aa2ed4f085965d4cb08f9a3874d" gracePeriod=600 Dec 02 10:23:23 crc kubenswrapper[4711]: I1202 10:23:23.645228 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="a8486a79de57fb21bfeb54f206501206d2dc4aa2ed4f085965d4cb08f9a3874d" exitCode=0 Dec 02 10:23:23 crc kubenswrapper[4711]: I1202 10:23:23.645264 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"a8486a79de57fb21bfeb54f206501206d2dc4aa2ed4f085965d4cb08f9a3874d"} Dec 02 10:23:23 crc kubenswrapper[4711]: I1202 10:23:23.645886 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"e29911e1e38ecebeadbbef681ae791a5f19b2f30398553fbf3d8e99e960526fb"} Dec 02 10:23:23 crc kubenswrapper[4711]: I1202 10:23:23.645973 4711 scope.go:117] "RemoveContainer" containerID="1e120b0a0cbb6d1edafb4930cf20f647e52a7929050bec77e5ac0b462823f904" Dec 02 10:23:31 crc kubenswrapper[4711]: I1202 10:23:31.310767 4711 scope.go:117] 
"RemoveContainer" containerID="aa00950bfe44c2caf2780563df003e784560612550204d741f3577f865dcd38b" Dec 02 10:23:31 crc kubenswrapper[4711]: I1202 10:23:31.341426 4711 scope.go:117] "RemoveContainer" containerID="6bb831207ce722686e67f328f4161321da80653864d97d1c02e285fa196310d4" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.597207 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rvx4g"] Dec 02 10:25:02 crc kubenswrapper[4711]: E1202 10:25:02.598773 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" containerName="registry" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.598854 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" containerName="registry" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.599025 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="d31bf5d5-5c80-4e32-ac69-6fb031c2cdf0" containerName="registry" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.599545 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rvx4g" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.604019 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mht85"] Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.606919 4711 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-r479c" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.607106 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-mht85" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.607494 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-h2pxz"] Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.608186 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.612994 4711 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-m5jsc" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.613437 4711 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-d98bt" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.613865 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.616800 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.634901 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rvx4g"] Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.648841 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mht85"] Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.654056 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-h2pxz"] Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.728200 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwgzj\" (UniqueName: \"kubernetes.io/projected/beca3d6d-1017-4654-a1e1-6539558badf4-kube-api-access-kwgzj\") pod 
\"cert-manager-webhook-5655c58dd6-h2pxz\" (UID: \"beca3d6d-1017-4654-a1e1-6539558badf4\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.728254 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xct44\" (UniqueName: \"kubernetes.io/projected/9f001212-3824-41ea-a836-63c46277f629-kube-api-access-xct44\") pod \"cert-manager-cainjector-7f985d654d-mht85\" (UID: \"9f001212-3824-41ea-a836-63c46277f629\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mht85" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.728287 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vj4\" (UniqueName: \"kubernetes.io/projected/1deeebfb-423b-4a73-a76a-da43ae5dd8a9-kube-api-access-x7vj4\") pod \"cert-manager-5b446d88c5-rvx4g\" (UID: \"1deeebfb-423b-4a73-a76a-da43ae5dd8a9\") " pod="cert-manager/cert-manager-5b446d88c5-rvx4g" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.829842 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwgzj\" (UniqueName: \"kubernetes.io/projected/beca3d6d-1017-4654-a1e1-6539558badf4-kube-api-access-kwgzj\") pod \"cert-manager-webhook-5655c58dd6-h2pxz\" (UID: \"beca3d6d-1017-4654-a1e1-6539558badf4\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.829897 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xct44\" (UniqueName: \"kubernetes.io/projected/9f001212-3824-41ea-a836-63c46277f629-kube-api-access-xct44\") pod \"cert-manager-cainjector-7f985d654d-mht85\" (UID: \"9f001212-3824-41ea-a836-63c46277f629\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mht85" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.829939 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x7vj4\" (UniqueName: \"kubernetes.io/projected/1deeebfb-423b-4a73-a76a-da43ae5dd8a9-kube-api-access-x7vj4\") pod \"cert-manager-5b446d88c5-rvx4g\" (UID: \"1deeebfb-423b-4a73-a76a-da43ae5dd8a9\") " pod="cert-manager/cert-manager-5b446d88c5-rvx4g" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.859595 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7vj4\" (UniqueName: \"kubernetes.io/projected/1deeebfb-423b-4a73-a76a-da43ae5dd8a9-kube-api-access-x7vj4\") pod \"cert-manager-5b446d88c5-rvx4g\" (UID: \"1deeebfb-423b-4a73-a76a-da43ae5dd8a9\") " pod="cert-manager/cert-manager-5b446d88c5-rvx4g" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.860435 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwgzj\" (UniqueName: \"kubernetes.io/projected/beca3d6d-1017-4654-a1e1-6539558badf4-kube-api-access-kwgzj\") pod \"cert-manager-webhook-5655c58dd6-h2pxz\" (UID: \"beca3d6d-1017-4654-a1e1-6539558badf4\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.865213 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xct44\" (UniqueName: \"kubernetes.io/projected/9f001212-3824-41ea-a836-63c46277f629-kube-api-access-xct44\") pod \"cert-manager-cainjector-7f985d654d-mht85\" (UID: \"9f001212-3824-41ea-a836-63c46277f629\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mht85" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.927496 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rvx4g" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.941313 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" Dec 02 10:25:02 crc kubenswrapper[4711]: I1202 10:25:02.947266 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-mht85" Dec 02 10:25:03 crc kubenswrapper[4711]: I1202 10:25:03.187004 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-h2pxz"] Dec 02 10:25:03 crc kubenswrapper[4711]: W1202 10:25:03.197136 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeca3d6d_1017_4654_a1e1_6539558badf4.slice/crio-22f5632fea5a68281efef079aefa2cce6f5e681f8d753f373ea1cec6839b4664 WatchSource:0}: Error finding container 22f5632fea5a68281efef079aefa2cce6f5e681f8d753f373ea1cec6839b4664: Status 404 returned error can't find the container with id 22f5632fea5a68281efef079aefa2cce6f5e681f8d753f373ea1cec6839b4664 Dec 02 10:25:03 crc kubenswrapper[4711]: I1202 10:25:03.200341 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:25:03 crc kubenswrapper[4711]: I1202 10:25:03.242762 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rvx4g"] Dec 02 10:25:03 crc kubenswrapper[4711]: I1202 10:25:03.281335 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mht85"] Dec 02 10:25:03 crc kubenswrapper[4711]: W1202 10:25:03.286499 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f001212_3824_41ea_a836_63c46277f629.slice/crio-601bc132a49b60c142885f53f13682ad59f9bf44ca64f3a4dff68f7e758485f5 WatchSource:0}: Error finding container 601bc132a49b60c142885f53f13682ad59f9bf44ca64f3a4dff68f7e758485f5: Status 404 returned error can't find the container with id 
601bc132a49b60c142885f53f13682ad59f9bf44ca64f3a4dff68f7e758485f5 Dec 02 10:25:03 crc kubenswrapper[4711]: I1202 10:25:03.320249 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-mht85" event={"ID":"9f001212-3824-41ea-a836-63c46277f629","Type":"ContainerStarted","Data":"601bc132a49b60c142885f53f13682ad59f9bf44ca64f3a4dff68f7e758485f5"} Dec 02 10:25:03 crc kubenswrapper[4711]: I1202 10:25:03.321215 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" event={"ID":"beca3d6d-1017-4654-a1e1-6539558badf4","Type":"ContainerStarted","Data":"22f5632fea5a68281efef079aefa2cce6f5e681f8d753f373ea1cec6839b4664"} Dec 02 10:25:03 crc kubenswrapper[4711]: I1202 10:25:03.322061 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rvx4g" event={"ID":"1deeebfb-423b-4a73-a76a-da43ae5dd8a9","Type":"ContainerStarted","Data":"84a4b4e6e4842b246f88bb5b6f957f971be95c9196eb2a2542e9d80c96e21384"} Dec 02 10:25:07 crc kubenswrapper[4711]: I1202 10:25:07.344640 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" event={"ID":"beca3d6d-1017-4654-a1e1-6539558badf4","Type":"ContainerStarted","Data":"8bf1ef37796a43e5003731497de31a95df80ee77c5e82ffe0dd872ca1e55831a"} Dec 02 10:25:07 crc kubenswrapper[4711]: I1202 10:25:07.345070 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" Dec 02 10:25:07 crc kubenswrapper[4711]: I1202 10:25:07.346545 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rvx4g" event={"ID":"1deeebfb-423b-4a73-a76a-da43ae5dd8a9","Type":"ContainerStarted","Data":"dc96a584d853d58812388fe1495ce9dd164352f1e02d82bb0af690af99f4dc31"} Dec 02 10:25:07 crc kubenswrapper[4711]: I1202 10:25:07.348107 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-7f985d654d-mht85" event={"ID":"9f001212-3824-41ea-a836-63c46277f629","Type":"ContainerStarted","Data":"cf3ddc1e7cfb26e4c56151f376a4213769c377db17e067237dcfc5cf7d9113bb"} Dec 02 10:25:07 crc kubenswrapper[4711]: I1202 10:25:07.360794 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" podStartSLOduration=1.68368936 podStartE2EDuration="5.360728366s" podCreationTimestamp="2025-12-02 10:25:02 +0000 UTC" firstStartedPulling="2025-12-02 10:25:03.200046535 +0000 UTC m=+692.909412982" lastFinishedPulling="2025-12-02 10:25:06.877085551 +0000 UTC m=+696.586451988" observedRunningTime="2025-12-02 10:25:07.359710626 +0000 UTC m=+697.069077073" watchObservedRunningTime="2025-12-02 10:25:07.360728366 +0000 UTC m=+697.070094833" Dec 02 10:25:07 crc kubenswrapper[4711]: I1202 10:25:07.381803 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-rvx4g" podStartSLOduration=1.696291321 podStartE2EDuration="5.381788351s" podCreationTimestamp="2025-12-02 10:25:02 +0000 UTC" firstStartedPulling="2025-12-02 10:25:03.255990673 +0000 UTC m=+692.965357120" lastFinishedPulling="2025-12-02 10:25:06.941487663 +0000 UTC m=+696.650854150" observedRunningTime="2025-12-02 10:25:07.378857317 +0000 UTC m=+697.088223764" watchObservedRunningTime="2025-12-02 10:25:07.381788351 +0000 UTC m=+697.091154798" Dec 02 10:25:07 crc kubenswrapper[4711]: I1202 10:25:07.410455 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-mht85" podStartSLOduration=1.8326512510000001 podStartE2EDuration="5.410429284s" podCreationTimestamp="2025-12-02 10:25:02 +0000 UTC" firstStartedPulling="2025-12-02 10:25:03.292775231 +0000 UTC m=+693.002141678" lastFinishedPulling="2025-12-02 10:25:06.870553264 +0000 UTC m=+696.579919711" observedRunningTime="2025-12-02 
10:25:07.40472269 +0000 UTC m=+697.114089147" watchObservedRunningTime="2025-12-02 10:25:07.410429284 +0000 UTC m=+697.119795771" Dec 02 10:25:12 crc kubenswrapper[4711]: I1202 10:25:12.945548 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-h2pxz" Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.586066 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.586839 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.771330 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n6sdh"] Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.772347 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="northd" containerID="cri-o://7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99" gracePeriod=30 Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.772351 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28" gracePeriod=30 Dec 02 10:25:22 crc 
kubenswrapper[4711]: I1202 10:25:22.772428 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovn-acl-logging" containerID="cri-o://a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5" gracePeriod=30 Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.772307 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="nbdb" containerID="cri-o://5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59" gracePeriod=30 Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.772408 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="sbdb" containerID="cri-o://dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c" gracePeriod=30 Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.772381 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kube-rbac-proxy-node" containerID="cri-o://46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274" gracePeriod=30 Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.772762 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovn-controller" containerID="cri-o://31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8" gracePeriod=30 Dec 02 10:25:22 crc kubenswrapper[4711]: I1202 10:25:22.839034 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" 
podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" containerID="cri-o://d1ed0bad318795fe5c82bf6bdb102e95cf4225a843f297d4a2cf129f71292667" gracePeriod=30 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.457194 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovnkube-controller/3.log" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.465053 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovn-acl-logging/0.log" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.466065 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovn-controller/0.log" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.466932 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="d1ed0bad318795fe5c82bf6bdb102e95cf4225a843f297d4a2cf129f71292667" exitCode=0 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467027 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c" exitCode=0 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467020 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"d1ed0bad318795fe5c82bf6bdb102e95cf4225a843f297d4a2cf129f71292667"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467046 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59" exitCode=0 Dec 02 10:25:23 crc 
kubenswrapper[4711]: I1202 10:25:23.467114 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467147 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467156 4711 scope.go:117] "RemoveContainer" containerID="24e8d3fff74bbcb9e9a289934676a348f91f3b53073520385949b62a5d228726" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467172 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467123 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99" exitCode=0 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467219 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28" exitCode=0 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467250 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274" exitCode=0 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467275 4711 generic.go:334] "Generic (PLEG): container finished" 
podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5" exitCode=143 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467294 4711 generic.go:334] "Generic (PLEG): container finished" podID="064b98c4-b388-4c62-bcbc-11037274acdb" containerID="31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8" exitCode=143 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467339 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467407 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467465 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.467484 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.470721 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/2.log" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.471642 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/1.log" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.471713 4711 generic.go:334] "Generic (PLEG): container finished" podID="2fab88a2-3875-44a4-a926-7c76836b51b8" containerID="6b8753459d7fb04fe0374db1e644abb403557d98f0fa752fbe976882092f8082" exitCode=2 Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.471768 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4qrj7" event={"ID":"2fab88a2-3875-44a4-a926-7c76836b51b8","Type":"ContainerDied","Data":"6b8753459d7fb04fe0374db1e644abb403557d98f0fa752fbe976882092f8082"} Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.472573 4711 scope.go:117] "RemoveContainer" containerID="6b8753459d7fb04fe0374db1e644abb403557d98f0fa752fbe976882092f8082" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.473080 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4qrj7_openshift-multus(2fab88a2-3875-44a4-a926-7c76836b51b8)\"" pod="openshift-multus/multus-4qrj7" podUID="2fab88a2-3875-44a4-a926-7c76836b51b8" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.501059 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovn-acl-logging/0.log" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.502010 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovn-controller/0.log" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.503175 4711 scope.go:117] "RemoveContainer" containerID="783758559e488193674070f1f799f346d9860076ca5dc332d736daf1c9e290f6" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.503674 4711 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528079 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-ovn-kubernetes\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528181 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-kubelet\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528246 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-netd\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528268 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528363 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528346 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-ovn\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528435 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528576 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-log-socket\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528549 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528675 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-log-socket" (OuterVolumeSpecName: "log-socket") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528782 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-systemd\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528822 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-node-log\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.528928 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-openvswitch\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.529145 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-node-log" (OuterVolumeSpecName: "node-log") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.529365 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.530134 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-script-lib\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.532123 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68skn\" (UniqueName: \"kubernetes.io/projected/064b98c4-b388-4c62-bcbc-11037274acdb-kube-api-access-68skn\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.532239 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-systemd-units\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.532355 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.532520 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-bin\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.531857 4711 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.533847 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.533909 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.533865 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-config\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.533996 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-var-lib-openvswitch\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534112 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534144 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-slash\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534195 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-netns\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534231 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-etc-openvswitch\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534275 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/064b98c4-b388-4c62-bcbc-11037274acdb-ovn-node-metrics-cert\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534293 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-slash" (OuterVolumeSpecName: "host-slash") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534351 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-env-overrides\") pod \"064b98c4-b388-4c62-bcbc-11037274acdb\" (UID: \"064b98c4-b388-4c62-bcbc-11037274acdb\") " Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534388 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534428 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534834 4711 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534860 4711 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534878 4711 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534894 4711 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-log-socket\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534911 4711 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-node-log\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534927 4711 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534944 4711 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.534990 4711 reconciler_common.go:293] 
"Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.535007 4711 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.535024 4711 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.535042 4711 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-slash\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.535062 4711 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.535078 4711 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.535094 4711 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.540781 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.541093 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.541288 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064b98c4-b388-4c62-bcbc-11037274acdb-kube-api-access-68skn" (OuterVolumeSpecName: "kube-api-access-68skn") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "kube-api-access-68skn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.541441 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064b98c4-b388-4c62-bcbc-11037274acdb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.542133 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.560537 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "064b98c4-b388-4c62-bcbc-11037274acdb" (UID: "064b98c4-b388-4c62-bcbc-11037274acdb"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598424 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jt6dc"] Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598683 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598697 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598715 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="nbdb" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598722 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="nbdb" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598732 4711 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kube-rbac-proxy-node" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598740 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kube-rbac-proxy-node" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598755 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovn-acl-logging" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598763 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovn-acl-logging" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598772 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kubecfg-setup" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598779 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kubecfg-setup" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598791 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="northd" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598798 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="northd" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598809 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598815 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598825 4711 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovn-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598832 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovn-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598842 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="sbdb" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598849 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="sbdb" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598857 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598864 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598871 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598877 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.598886 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.598893 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599024 4711 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="nbdb" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599039 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="sbdb" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599045 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599056 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovn-acl-logging" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599068 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kube-rbac-proxy-node" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599079 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovn-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599089 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599097 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599109 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599118 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599127 4711 
memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="northd" Dec 02 10:25:23 crc kubenswrapper[4711]: E1202 10:25:23.599243 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599254 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.599353 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" containerName="ovnkube-controller" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.602143 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.635831 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-env-overrides\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.635905 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.635934 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovnkube-config\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636005 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovn-node-metrics-cert\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636061 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-ovn\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636097 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-var-lib-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636119 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-systemd\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636155 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-cni-bin\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636196 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636223 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-systemd-units\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636242 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-log-socket\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636265 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-node-log\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636290 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-etc-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636322 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636345 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovnkube-script-lib\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636370 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-kubelet\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636395 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khwss\" (UniqueName: \"kubernetes.io/projected/ae710057-a4e8-424b-9e3f-77d4811fe89b-kube-api-access-khwss\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636419 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-run-netns\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636442 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-cni-netd\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636628 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-slash\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636672 4711 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/064b98c4-b388-4c62-bcbc-11037274acdb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636688 4711 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636700 4711 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636712 4711 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68skn\" (UniqueName: \"kubernetes.io/projected/064b98c4-b388-4c62-bcbc-11037274acdb-kube-api-access-68skn\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636725 4711 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/064b98c4-b388-4c62-bcbc-11037274acdb-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.636738 4711 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/064b98c4-b388-4c62-bcbc-11037274acdb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738254 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-slash\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738304 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-env-overrides\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738500 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 
10:25:23.738516 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovnkube-config\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738531 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovn-node-metrics-cert\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738559 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-ovn\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738576 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-var-lib-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738592 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-systemd\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738614 4711 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-cni-bin\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738634 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738653 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-systemd-units\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738666 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-log-socket\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738680 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-node-log\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738698 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-etc-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738717 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738732 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovnkube-script-lib\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738750 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-kubelet\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738767 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khwss\" (UniqueName: \"kubernetes.io/projected/ae710057-a4e8-424b-9e3f-77d4811fe89b-kube-api-access-khwss\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738786 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-run-netns\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738802 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-cni-netd\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738856 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-cni-netd\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.738892 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739269 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-log-socket\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739527 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-env-overrides\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739621 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-slash\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739706 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovnkube-script-lib\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739743 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-node-log\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739767 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-etc-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739786 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: 
\"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739806 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-run-netns\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739825 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-kubelet\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739867 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-systemd\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739887 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-run-ovn\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739906 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-var-lib-openvswitch\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc 
kubenswrapper[4711]: I1202 10:25:23.739927 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739945 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-host-cni-bin\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.739997 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae710057-a4e8-424b-9e3f-77d4811fe89b-systemd-units\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.740260 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovnkube-config\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.743161 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae710057-a4e8-424b-9e3f-77d4811fe89b-ovn-node-metrics-cert\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.757403 4711 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-khwss\" (UniqueName: \"kubernetes.io/projected/ae710057-a4e8-424b-9e3f-77d4811fe89b-kube-api-access-khwss\") pod \"ovnkube-node-jt6dc\" (UID: \"ae710057-a4e8-424b-9e3f-77d4811fe89b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: I1202 10:25:23.922468 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:23 crc kubenswrapper[4711]: W1202 10:25:23.953654 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae710057_a4e8_424b_9e3f_77d4811fe89b.slice/crio-6cd9643362120fe3ad17ec2c3b17e3009b49b8b4bc2b842b1cdd5480b2462e5f WatchSource:0}: Error finding container 6cd9643362120fe3ad17ec2c3b17e3009b49b8b4bc2b842b1cdd5480b2462e5f: Status 404 returned error can't find the container with id 6cd9643362120fe3ad17ec2c3b17e3009b49b8b4bc2b842b1cdd5480b2462e5f Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.480848 4711 generic.go:334] "Generic (PLEG): container finished" podID="ae710057-a4e8-424b-9e3f-77d4811fe89b" containerID="72bb66bdc81fb4d720786f2b3bf73e5af42a080ef7fd613df78b1f917f64f385" exitCode=0 Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.480945 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerDied","Data":"72bb66bdc81fb4d720786f2b3bf73e5af42a080ef7fd613df78b1f917f64f385"} Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.481068 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"6cd9643362120fe3ad17ec2c3b17e3009b49b8b4bc2b842b1cdd5480b2462e5f"} Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.490894 4711 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovn-acl-logging/0.log" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.491789 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n6sdh_064b98c4-b388-4c62-bcbc-11037274acdb/ovn-controller/0.log" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.492382 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" event={"ID":"064b98c4-b388-4c62-bcbc-11037274acdb","Type":"ContainerDied","Data":"a916b32f1d2ea1899786f3bad28d11d5cc126432e31994d471e82f91b4e3153b"} Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.492445 4711 scope.go:117] "RemoveContainer" containerID="d1ed0bad318795fe5c82bf6bdb102e95cf4225a843f297d4a2cf129f71292667" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.492635 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n6sdh" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.495685 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/2.log" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.522144 4711 scope.go:117] "RemoveContainer" containerID="dc8f2350789a7906e5865b1bc1e2718acde945f86b81a3a8b79150ebdfa80b2c" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.545680 4711 scope.go:117] "RemoveContainer" containerID="5c194ad181a01d2b8b017a7b2cc56c824c64e6ea9b1169f8e8f75816c2687f59" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.600734 4711 scope.go:117] "RemoveContainer" containerID="7923d9642298f3ac9c1a3f02a3de8955d14f71f1a182c84a45ab514b30284a99" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.613008 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n6sdh"] Dec 02 10:25:24 crc 
kubenswrapper[4711]: I1202 10:25:24.619021 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n6sdh"] Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.624944 4711 scope.go:117] "RemoveContainer" containerID="0bddb4ab7d935c8642a48cc3d72c5ce197a85ae9b5b658eafc6d75cee37a5b28" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.643832 4711 scope.go:117] "RemoveContainer" containerID="46f777da4a207763450b34c2c12ed1fd966464bdad7ea799c0f1b1b1c0cde274" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.666836 4711 scope.go:117] "RemoveContainer" containerID="a34c64eff4956c0a250fcf7dafed259abbda7c6ac1b613ad6ec0bbafe70f03b5" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.686279 4711 scope.go:117] "RemoveContainer" containerID="31a3761a7cf5a4e969794e8556ce052739ca032745249a78e22b185ef61ee9f8" Dec 02 10:25:24 crc kubenswrapper[4711]: I1202 10:25:24.720139 4711 scope.go:117] "RemoveContainer" containerID="1559b3480dd38eb21b4575cb1251952dc28e82d65d1686939068a0094d46e387" Dec 02 10:25:25 crc kubenswrapper[4711]: I1202 10:25:25.089526 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064b98c4-b388-4c62-bcbc-11037274acdb" path="/var/lib/kubelet/pods/064b98c4-b388-4c62-bcbc-11037274acdb/volumes" Dec 02 10:25:25 crc kubenswrapper[4711]: I1202 10:25:25.508480 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"4f8c44ab39084b4f3cbdb5853af5dda8652a7b657016f88e301f302230802dd6"} Dec 02 10:25:25 crc kubenswrapper[4711]: I1202 10:25:25.508859 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"a9de9c80a1daafd184499eb1318b4167fe3545bd412afd5129acf40a41b8c416"} Dec 02 10:25:25 crc kubenswrapper[4711]: I1202 10:25:25.508884 
4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"d426c05e7210c980077ea4911b8ea7262699c6b65ede7169810afc85d23849b2"} Dec 02 10:25:25 crc kubenswrapper[4711]: I1202 10:25:25.508903 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"5f322b6c4d2222ae464578a274c59a59b5864747d179f697c83902fc4d9f1d25"} Dec 02 10:25:25 crc kubenswrapper[4711]: I1202 10:25:25.508926 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"51a28056abf50f02167b076ec2b51d8f305301348386b7b27cfe9e597064a0e3"} Dec 02 10:25:25 crc kubenswrapper[4711]: I1202 10:25:25.508942 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"7e69758398e1d0d9d3c02d9be0ae49b6eba83a7a214158446c5806d3814ff3ed"} Dec 02 10:25:28 crc kubenswrapper[4711]: I1202 10:25:28.543473 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"3ad5f6fd36a8da44cfe516f482051f22afad48d56808ee957a42a99d545813f9"} Dec 02 10:25:30 crc kubenswrapper[4711]: I1202 10:25:30.562459 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" event={"ID":"ae710057-a4e8-424b-9e3f-77d4811fe89b","Type":"ContainerStarted","Data":"7383ddf2d54c5802a752cbd83726f26175f707b92e6e9f061f1dac3e9fb92f59"} Dec 02 10:25:30 crc kubenswrapper[4711]: I1202 10:25:30.563753 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:30 crc kubenswrapper[4711]: I1202 10:25:30.591729 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:30 crc kubenswrapper[4711]: I1202 10:25:30.597807 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" podStartSLOduration=7.597776808 podStartE2EDuration="7.597776808s" podCreationTimestamp="2025-12-02 10:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:25:30.594267169 +0000 UTC m=+720.303633676" watchObservedRunningTime="2025-12-02 10:25:30.597776808 +0000 UTC m=+720.307143255" Dec 02 10:25:31 crc kubenswrapper[4711]: I1202 10:25:31.568163 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:31 crc kubenswrapper[4711]: I1202 10:25:31.570475 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:31 crc kubenswrapper[4711]: I1202 10:25:31.600652 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:37 crc kubenswrapper[4711]: I1202 10:25:37.078231 4711 scope.go:117] "RemoveContainer" containerID="6b8753459d7fb04fe0374db1e644abb403557d98f0fa752fbe976882092f8082" Dec 02 10:25:37 crc kubenswrapper[4711]: E1202 10:25:37.079076 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4qrj7_openshift-multus(2fab88a2-3875-44a4-a926-7c76836b51b8)\"" pod="openshift-multus/multus-4qrj7" podUID="2fab88a2-3875-44a4-a926-7c76836b51b8" Dec 02 10:25:50 crc kubenswrapper[4711]: I1202 
10:25:50.078679 4711 scope.go:117] "RemoveContainer" containerID="6b8753459d7fb04fe0374db1e644abb403557d98f0fa752fbe976882092f8082" Dec 02 10:25:50 crc kubenswrapper[4711]: I1202 10:25:50.692627 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4qrj7_2fab88a2-3875-44a4-a926-7c76836b51b8/kube-multus/2.log" Dec 02 10:25:50 crc kubenswrapper[4711]: I1202 10:25:50.693104 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4qrj7" event={"ID":"2fab88a2-3875-44a4-a926-7c76836b51b8","Type":"ContainerStarted","Data":"85343335f1684d67263f7ba445c80753fd78b3d6f02c503f2de6ddc57ed0faaa"} Dec 02 10:25:52 crc kubenswrapper[4711]: I1202 10:25:52.585563 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:25:52 crc kubenswrapper[4711]: I1202 10:25:52.586053 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:25:53 crc kubenswrapper[4711]: I1202 10:25:53.955738 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jt6dc" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.773452 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9"] Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.776769 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.782616 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.802851 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9"] Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.818919 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.819116 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.819202 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fl56\" (UniqueName: \"kubernetes.io/projected/33ec735d-946b-49d3-b1a0-4cb8d263647b-kube-api-access-8fl56\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: 
I1202 10:25:56.920238 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.920340 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.920414 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fl56\" (UniqueName: \"kubernetes.io/projected/33ec735d-946b-49d3-b1a0-4cb8d263647b-kube-api-access-8fl56\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.921212 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.921271 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:56 crc kubenswrapper[4711]: I1202 10:25:56.945022 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fl56\" (UniqueName: \"kubernetes.io/projected/33ec735d-946b-49d3-b1a0-4cb8d263647b-kube-api-access-8fl56\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:57 crc kubenswrapper[4711]: I1202 10:25:57.101603 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:25:57 crc kubenswrapper[4711]: I1202 10:25:57.300682 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9"] Dec 02 10:25:57 crc kubenswrapper[4711]: I1202 10:25:57.739437 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" event={"ID":"33ec735d-946b-49d3-b1a0-4cb8d263647b","Type":"ContainerStarted","Data":"6a6919f60769f8a5a50fa123d92812e836b4885abfece70da8ca6627b717d9a6"} Dec 02 10:25:58 crc kubenswrapper[4711]: I1202 10:25:58.746419 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" event={"ID":"33ec735d-946b-49d3-b1a0-4cb8d263647b","Type":"ContainerStarted","Data":"7c02ce0f485fa3c109dfa2d0077eb3e4ab3ade1ee6107ef5c8c5311062798c55"} Dec 02 10:25:59 crc kubenswrapper[4711]: I1202 10:25:59.757203 4711 
generic.go:334] "Generic (PLEG): container finished" podID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerID="7c02ce0f485fa3c109dfa2d0077eb3e4ab3ade1ee6107ef5c8c5311062798c55" exitCode=0 Dec 02 10:25:59 crc kubenswrapper[4711]: I1202 10:25:59.757313 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" event={"ID":"33ec735d-946b-49d3-b1a0-4cb8d263647b","Type":"ContainerDied","Data":"7c02ce0f485fa3c109dfa2d0077eb3e4ab3ade1ee6107ef5c8c5311062798c55"} Dec 02 10:26:01 crc kubenswrapper[4711]: I1202 10:26:01.806182 4711 generic.go:334] "Generic (PLEG): container finished" podID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerID="7975de0d8ef49387c78734428a5a2eabc87809a1141c895c022ee5ce3fbf3c20" exitCode=0 Dec 02 10:26:01 crc kubenswrapper[4711]: I1202 10:26:01.806260 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" event={"ID":"33ec735d-946b-49d3-b1a0-4cb8d263647b","Type":"ContainerDied","Data":"7975de0d8ef49387c78734428a5a2eabc87809a1141c895c022ee5ce3fbf3c20"} Dec 02 10:26:02 crc kubenswrapper[4711]: I1202 10:26:02.817100 4711 generic.go:334] "Generic (PLEG): container finished" podID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerID="91f487b79610852418d6680445460f6d2ba4c8dfec1180c3a9e7e6ba821047cd" exitCode=0 Dec 02 10:26:02 crc kubenswrapper[4711]: I1202 10:26:02.817160 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" event={"ID":"33ec735d-946b-49d3-b1a0-4cb8d263647b","Type":"ContainerDied","Data":"91f487b79610852418d6680445460f6d2ba4c8dfec1180c3a9e7e6ba821047cd"} Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.129160 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.314642 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-util\") pod \"33ec735d-946b-49d3-b1a0-4cb8d263647b\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.314793 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-bundle\") pod \"33ec735d-946b-49d3-b1a0-4cb8d263647b\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.314855 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fl56\" (UniqueName: \"kubernetes.io/projected/33ec735d-946b-49d3-b1a0-4cb8d263647b-kube-api-access-8fl56\") pod \"33ec735d-946b-49d3-b1a0-4cb8d263647b\" (UID: \"33ec735d-946b-49d3-b1a0-4cb8d263647b\") " Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.316336 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-bundle" (OuterVolumeSpecName: "bundle") pod "33ec735d-946b-49d3-b1a0-4cb8d263647b" (UID: "33ec735d-946b-49d3-b1a0-4cb8d263647b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.321565 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ec735d-946b-49d3-b1a0-4cb8d263647b-kube-api-access-8fl56" (OuterVolumeSpecName: "kube-api-access-8fl56") pod "33ec735d-946b-49d3-b1a0-4cb8d263647b" (UID: "33ec735d-946b-49d3-b1a0-4cb8d263647b"). InnerVolumeSpecName "kube-api-access-8fl56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.349454 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-util" (OuterVolumeSpecName: "util") pod "33ec735d-946b-49d3-b1a0-4cb8d263647b" (UID: "33ec735d-946b-49d3-b1a0-4cb8d263647b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.416272 4711 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-util\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.416373 4711 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33ec735d-946b-49d3-b1a0-4cb8d263647b-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.416403 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fl56\" (UniqueName: \"kubernetes.io/projected/33ec735d-946b-49d3-b1a0-4cb8d263647b-kube-api-access-8fl56\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.835133 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" event={"ID":"33ec735d-946b-49d3-b1a0-4cb8d263647b","Type":"ContainerDied","Data":"6a6919f60769f8a5a50fa123d92812e836b4885abfece70da8ca6627b717d9a6"} Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.835244 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6919f60769f8a5a50fa123d92812e836b4885abfece70da8ca6627b717d9a6" Dec 02 10:26:04 crc kubenswrapper[4711]: I1202 10:26:04.835403 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.382293 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q"] Dec 02 10:26:08 crc kubenswrapper[4711]: E1202 10:26:08.382795 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerName="pull" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.382829 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerName="pull" Dec 02 10:26:08 crc kubenswrapper[4711]: E1202 10:26:08.382852 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerName="util" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.382860 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerName="util" Dec 02 10:26:08 crc kubenswrapper[4711]: E1202 10:26:08.382871 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerName="extract" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.382880 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerName="extract" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.383019 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ec735d-946b-49d3-b1a0-4cb8d263647b" containerName="extract" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.383441 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.386122 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-q5c6r" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.387578 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.387606 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.393732 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q"] Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.571452 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jntsz\" (UniqueName: \"kubernetes.io/projected/edf95574-1178-4d62-b5b2-7dd68fce39da-kube-api-access-jntsz\") pod \"nmstate-operator-5b5b58f5c8-t599q\" (UID: \"edf95574-1178-4d62-b5b2-7dd68fce39da\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.672495 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jntsz\" (UniqueName: \"kubernetes.io/projected/edf95574-1178-4d62-b5b2-7dd68fce39da-kube-api-access-jntsz\") pod \"nmstate-operator-5b5b58f5c8-t599q\" (UID: \"edf95574-1178-4d62-b5b2-7dd68fce39da\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.699635 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jntsz\" (UniqueName: \"kubernetes.io/projected/edf95574-1178-4d62-b5b2-7dd68fce39da-kube-api-access-jntsz\") pod \"nmstate-operator-5b5b58f5c8-t599q\" (UID: 
\"edf95574-1178-4d62-b5b2-7dd68fce39da\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q" Dec 02 10:26:08 crc kubenswrapper[4711]: I1202 10:26:08.706432 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q" Dec 02 10:26:09 crc kubenswrapper[4711]: I1202 10:26:09.138500 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q"] Dec 02 10:26:09 crc kubenswrapper[4711]: I1202 10:26:09.865743 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q" event={"ID":"edf95574-1178-4d62-b5b2-7dd68fce39da","Type":"ContainerStarted","Data":"1fd1571ac16d3fa1d1a0c8deca56381c0bead37626820a16ce53ea594afc0b06"} Dec 02 10:26:11 crc kubenswrapper[4711]: I1202 10:26:11.878013 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q" event={"ID":"edf95574-1178-4d62-b5b2-7dd68fce39da","Type":"ContainerStarted","Data":"f73f0582e4d23620d719780fae0a8478e077976835ee0ad5e784789c2b87a7ac"} Dec 02 10:26:11 crc kubenswrapper[4711]: I1202 10:26:11.907423 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-t599q" podStartSLOduration=1.9567584390000001 podStartE2EDuration="3.907390863s" podCreationTimestamp="2025-12-02 10:26:08 +0000 UTC" firstStartedPulling="2025-12-02 10:26:09.142513281 +0000 UTC m=+758.851879728" lastFinishedPulling="2025-12-02 10:26:11.093145705 +0000 UTC m=+760.802512152" observedRunningTime="2025-12-02 10:26:11.903406021 +0000 UTC m=+761.612772528" watchObservedRunningTime="2025-12-02 10:26:11.907390863 +0000 UTC m=+761.616757350" Dec 02 10:26:13 crc kubenswrapper[4711]: I1202 10:26:13.667482 4711 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 10:26:17 crc 
kubenswrapper[4711]: I1202 10:26:17.357021 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.358362 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.360708 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.360768 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gv8n9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.368626 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.369752 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.378214 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.396811 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.402559 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-6ndb5"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.403862 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.499836 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-ovs-socket\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.499880 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-dbus-socket\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.499918 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tdbg\" (UniqueName: \"kubernetes.io/projected/5b290675-5c06-445c-a50b-34ac2ba80718-kube-api-access-9tdbg\") pod \"nmstate-webhook-5f6d4c5ccb-qz4ft\" (UID: \"5b290675-5c06-445c-a50b-34ac2ba80718\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.499998 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfjs\" (UniqueName: \"kubernetes.io/projected/2fa89fd1-9149-414e-8214-c1bbb1563330-kube-api-access-nkfjs\") pod \"nmstate-metrics-7f946cbc9-ddk76\" (UID: \"2fa89fd1-9149-414e-8214-c1bbb1563330\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.500014 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blqj8\" (UniqueName: 
\"kubernetes.io/projected/e515d3a0-2428-4629-833b-f23af0d11b10-kube-api-access-blqj8\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.500061 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5b290675-5c06-445c-a50b-34ac2ba80718-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-qz4ft\" (UID: \"5b290675-5c06-445c-a50b-34ac2ba80718\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.500076 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-nmstate-lock\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.520837 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.521852 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.523183 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-77fbb" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.523611 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.523614 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.558063 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.601596 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-ovs-socket\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.601649 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-dbus-socket\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.601700 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tdbg\" (UniqueName: \"kubernetes.io/projected/5b290675-5c06-445c-a50b-34ac2ba80718-kube-api-access-9tdbg\") pod \"nmstate-webhook-5f6d4c5ccb-qz4ft\" (UID: \"5b290675-5c06-445c-a50b-34ac2ba80718\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:17 crc kubenswrapper[4711]: 
I1202 10:26:17.601722 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-ovs-socket\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.601760 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkfjs\" (UniqueName: \"kubernetes.io/projected/2fa89fd1-9149-414e-8214-c1bbb1563330-kube-api-access-nkfjs\") pod \"nmstate-metrics-7f946cbc9-ddk76\" (UID: \"2fa89fd1-9149-414e-8214-c1bbb1563330\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.601786 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blqj8\" (UniqueName: \"kubernetes.io/projected/e515d3a0-2428-4629-833b-f23af0d11b10-kube-api-access-blqj8\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.601827 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5b290675-5c06-445c-a50b-34ac2ba80718-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-qz4ft\" (UID: \"5b290675-5c06-445c-a50b-34ac2ba80718\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.601850 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-nmstate-lock\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.601943 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-nmstate-lock\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.602124 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e515d3a0-2428-4629-833b-f23af0d11b10-dbus-socket\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: E1202 10:26:17.602211 4711 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 02 10:26:17 crc kubenswrapper[4711]: E1202 10:26:17.602434 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b290675-5c06-445c-a50b-34ac2ba80718-tls-key-pair podName:5b290675-5c06-445c-a50b-34ac2ba80718 nodeName:}" failed. No retries permitted until 2025-12-02 10:26:18.102370509 +0000 UTC m=+767.811736996 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5b290675-5c06-445c-a50b-34ac2ba80718-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-qz4ft" (UID: "5b290675-5c06-445c-a50b-34ac2ba80718") : secret "openshift-nmstate-webhook" not found Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.629214 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkfjs\" (UniqueName: \"kubernetes.io/projected/2fa89fd1-9149-414e-8214-c1bbb1563330-kube-api-access-nkfjs\") pod \"nmstate-metrics-7f946cbc9-ddk76\" (UID: \"2fa89fd1-9149-414e-8214-c1bbb1563330\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.634372 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blqj8\" (UniqueName: \"kubernetes.io/projected/e515d3a0-2428-4629-833b-f23af0d11b10-kube-api-access-blqj8\") pod \"nmstate-handler-6ndb5\" (UID: \"e515d3a0-2428-4629-833b-f23af0d11b10\") " pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.637245 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tdbg\" (UniqueName: \"kubernetes.io/projected/5b290675-5c06-445c-a50b-34ac2ba80718-kube-api-access-9tdbg\") pod \"nmstate-webhook-5f6d4c5ccb-qz4ft\" (UID: \"5b290675-5c06-445c-a50b-34ac2ba80718\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.678139 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58fcf6886f-ttg9m"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.679199 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.689235 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58fcf6886f-ttg9m"] Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.703363 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f84ccb3-491d-4453-aaab-89e33441a3e5-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.703728 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qlb5\" (UniqueName: \"kubernetes.io/projected/4f84ccb3-491d-4453-aaab-89e33441a3e5-kube-api-access-6qlb5\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.703863 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f84ccb3-491d-4453-aaab-89e33441a3e5-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.710860 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.741067 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.804807 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f84ccb3-491d-4453-aaab-89e33441a3e5-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.804866 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-service-ca\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.804888 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-config\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.804971 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-serving-cert\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.804993 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-oauth-serving-cert\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.805010 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f84ccb3-491d-4453-aaab-89e33441a3e5-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: E1202 10:26:17.805024 4711 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 02 10:26:17 crc kubenswrapper[4711]: E1202 10:26:17.805091 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f84ccb3-491d-4453-aaab-89e33441a3e5-plugin-serving-cert podName:4f84ccb3-491d-4453-aaab-89e33441a3e5 nodeName:}" failed. No retries permitted until 2025-12-02 10:26:18.305073349 +0000 UTC m=+768.014439796 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/4f84ccb3-491d-4453-aaab-89e33441a3e5-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-b6vc9" (UID: "4f84ccb3-491d-4453-aaab-89e33441a3e5") : secret "plugin-serving-cert" not found Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.805027 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-trusted-ca-bundle\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.805374 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-oauth-config\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.805396 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qlb5\" (UniqueName: \"kubernetes.io/projected/4f84ccb3-491d-4453-aaab-89e33441a3e5-kube-api-access-6qlb5\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.805422 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm6pt\" (UniqueName: \"kubernetes.io/projected/55f91c4f-a5d0-4739-9305-9dfb4f75f494-kube-api-access-tm6pt\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc 
kubenswrapper[4711]: I1202 10:26:17.805878 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f84ccb3-491d-4453-aaab-89e33441a3e5-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.826181 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qlb5\" (UniqueName: \"kubernetes.io/projected/4f84ccb3-491d-4453-aaab-89e33441a3e5-kube-api-access-6qlb5\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.906518 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-config\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.906591 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-serving-cert\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.906614 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-oauth-serving-cert\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 
10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.906631 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-trusted-ca-bundle\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.906669 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-oauth-config\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.906697 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm6pt\" (UniqueName: \"kubernetes.io/projected/55f91c4f-a5d0-4739-9305-9dfb4f75f494-kube-api-access-tm6pt\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.907265 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-service-ca\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.907614 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-oauth-serving-cert\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: 
I1202 10:26:17.907712 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-trusted-ca-bundle\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.907924 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-service-ca\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.908167 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-config\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.909670 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-oauth-config\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.911346 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55f91c4f-a5d0-4739-9305-9dfb4f75f494-console-serving-cert\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.913314 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-handler-6ndb5" event={"ID":"e515d3a0-2428-4629-833b-f23af0d11b10","Type":"ContainerStarted","Data":"d39e48f59ab226e34a09740ecf89df861346a5c21587c4135ebd5db8143c3f4e"} Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.925366 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm6pt\" (UniqueName: \"kubernetes.io/projected/55f91c4f-a5d0-4739-9305-9dfb4f75f494-kube-api-access-tm6pt\") pod \"console-58fcf6886f-ttg9m\" (UID: \"55f91c4f-a5d0-4739-9305-9dfb4f75f494\") " pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:17 crc kubenswrapper[4711]: I1202 10:26:17.999012 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.113148 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5b290675-5c06-445c-a50b-34ac2ba80718-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-qz4ft\" (UID: \"5b290675-5c06-445c-a50b-34ac2ba80718\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.117551 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5b290675-5c06-445c-a50b-34ac2ba80718-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-qz4ft\" (UID: \"5b290675-5c06-445c-a50b-34ac2ba80718\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.138684 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76"] Dec 02 10:26:18 crc kubenswrapper[4711]: W1202 10:26:18.148674 4711 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fa89fd1_9149_414e_8214_c1bbb1563330.slice/crio-804f5651bd83d38618a480a03222953de40bc1c99d457b1f2965629367902d39 WatchSource:0}: Error finding container 804f5651bd83d38618a480a03222953de40bc1c99d457b1f2965629367902d39: Status 404 returned error can't find the container with id 804f5651bd83d38618a480a03222953de40bc1c99d457b1f2965629367902d39 Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.282032 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58fcf6886f-ttg9m"] Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.287219 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:18 crc kubenswrapper[4711]: W1202 10:26:18.287574 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55f91c4f_a5d0_4739_9305_9dfb4f75f494.slice/crio-492bd6f41e970fc8b9eb7546ca15105d50d4f74ea38708e099bdbf47e7d0437a WatchSource:0}: Error finding container 492bd6f41e970fc8b9eb7546ca15105d50d4f74ea38708e099bdbf47e7d0437a: Status 404 returned error can't find the container with id 492bd6f41e970fc8b9eb7546ca15105d50d4f74ea38708e099bdbf47e7d0437a Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.319312 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f84ccb3-491d-4453-aaab-89e33441a3e5-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.328844 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f84ccb3-491d-4453-aaab-89e33441a3e5-plugin-serving-cert\") pod 
\"nmstate-console-plugin-7fbb5f6569-b6vc9\" (UID: \"4f84ccb3-491d-4453-aaab-89e33441a3e5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.435527 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.506630 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft"] Dec 02 10:26:18 crc kubenswrapper[4711]: W1202 10:26:18.508632 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b290675_5c06_445c_a50b_34ac2ba80718.slice/crio-4d039fb9fcbecac4e9e004dd0ba48851047e5b133c81853be41752141f64a18c WatchSource:0}: Error finding container 4d039fb9fcbecac4e9e004dd0ba48851047e5b133c81853be41752141f64a18c: Status 404 returned error can't find the container with id 4d039fb9fcbecac4e9e004dd0ba48851047e5b133c81853be41752141f64a18c Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.680396 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9"] Dec 02 10:26:18 crc kubenswrapper[4711]: W1202 10:26:18.686732 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f84ccb3_491d_4453_aaab_89e33441a3e5.slice/crio-5cf361dee4b5a4dcec21b9fd51a7304821efe72ba352e95ab307521b001e8a37 WatchSource:0}: Error finding container 5cf361dee4b5a4dcec21b9fd51a7304821efe72ba352e95ab307521b001e8a37: Status 404 returned error can't find the container with id 5cf361dee4b5a4dcec21b9fd51a7304821efe72ba352e95ab307521b001e8a37 Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.919416 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" 
event={"ID":"4f84ccb3-491d-4453-aaab-89e33441a3e5","Type":"ContainerStarted","Data":"5cf361dee4b5a4dcec21b9fd51a7304821efe72ba352e95ab307521b001e8a37"} Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.920084 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" event={"ID":"5b290675-5c06-445c-a50b-34ac2ba80718","Type":"ContainerStarted","Data":"4d039fb9fcbecac4e9e004dd0ba48851047e5b133c81853be41752141f64a18c"} Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.920985 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" event={"ID":"2fa89fd1-9149-414e-8214-c1bbb1563330","Type":"ContainerStarted","Data":"804f5651bd83d38618a480a03222953de40bc1c99d457b1f2965629367902d39"} Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.921981 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58fcf6886f-ttg9m" event={"ID":"55f91c4f-a5d0-4739-9305-9dfb4f75f494","Type":"ContainerStarted","Data":"2c028529ed2295ee446490b0ef641f7614e73f4365cd8eb27c576f376ef697c4"} Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.922007 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58fcf6886f-ttg9m" event={"ID":"55f91c4f-a5d0-4739-9305-9dfb4f75f494","Type":"ContainerStarted","Data":"492bd6f41e970fc8b9eb7546ca15105d50d4f74ea38708e099bdbf47e7d0437a"} Dec 02 10:26:18 crc kubenswrapper[4711]: I1202 10:26:18.936722 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58fcf6886f-ttg9m" podStartSLOduration=1.9366949199999999 podStartE2EDuration="1.93669492s" podCreationTimestamp="2025-12-02 10:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:26:18.934354893 +0000 UTC m=+768.643721340" watchObservedRunningTime="2025-12-02 10:26:18.93669492 +0000 
UTC m=+768.646061367" Dec 02 10:26:20 crc kubenswrapper[4711]: I1202 10:26:20.942843 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" event={"ID":"5b290675-5c06-445c-a50b-34ac2ba80718","Type":"ContainerStarted","Data":"0e82c880a613f833e1ee967da2600db0b5c74a365e1db4536094657fe299164a"} Dec 02 10:26:20 crc kubenswrapper[4711]: I1202 10:26:20.944351 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:20 crc kubenswrapper[4711]: I1202 10:26:20.945428 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" event={"ID":"2fa89fd1-9149-414e-8214-c1bbb1563330","Type":"ContainerStarted","Data":"a31b4481dc6cfdef178959e5c25de7d2942254282dd974b1e1e809d44d41cb7c"} Dec 02 10:26:20 crc kubenswrapper[4711]: I1202 10:26:20.948112 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6ndb5" event={"ID":"e515d3a0-2428-4629-833b-f23af0d11b10","Type":"ContainerStarted","Data":"8495768649ec8fbf72f419d43e8c24c64efe2f913013d82d7390de7c4782a3f8"} Dec 02 10:26:20 crc kubenswrapper[4711]: I1202 10:26:20.948261 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:20 crc kubenswrapper[4711]: I1202 10:26:20.959998 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" podStartSLOduration=2.27208423 podStartE2EDuration="3.959935796s" podCreationTimestamp="2025-12-02 10:26:17 +0000 UTC" firstStartedPulling="2025-12-02 10:26:18.511212941 +0000 UTC m=+768.220579388" lastFinishedPulling="2025-12-02 10:26:20.199064467 +0000 UTC m=+769.908430954" observedRunningTime="2025-12-02 10:26:20.957011263 +0000 UTC m=+770.666377740" watchObservedRunningTime="2025-12-02 10:26:20.959935796 +0000 UTC m=+770.669302243" Dec 02 
10:26:20 crc kubenswrapper[4711]: I1202 10:26:20.978088 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-6ndb5" podStartSLOduration=1.550506433 podStartE2EDuration="3.978057928s" podCreationTimestamp="2025-12-02 10:26:17 +0000 UTC" firstStartedPulling="2025-12-02 10:26:17.77114489 +0000 UTC m=+767.480511337" lastFinishedPulling="2025-12-02 10:26:20.198696375 +0000 UTC m=+769.908062832" observedRunningTime="2025-12-02 10:26:20.977612436 +0000 UTC m=+770.686978903" watchObservedRunningTime="2025-12-02 10:26:20.978057928 +0000 UTC m=+770.687424385" Dec 02 10:26:21 crc kubenswrapper[4711]: I1202 10:26:21.969493 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" event={"ID":"4f84ccb3-491d-4453-aaab-89e33441a3e5","Type":"ContainerStarted","Data":"52b4a701a7185a90147b8f9c27d21859a4360746988eb73dde499b8861c4a854"} Dec 02 10:26:21 crc kubenswrapper[4711]: I1202 10:26:21.988649 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-b6vc9" podStartSLOduration=2.4161218030000002 podStartE2EDuration="4.988586645s" podCreationTimestamp="2025-12-02 10:26:17 +0000 UTC" firstStartedPulling="2025-12-02 10:26:18.688624087 +0000 UTC m=+768.397990544" lastFinishedPulling="2025-12-02 10:26:21.261088939 +0000 UTC m=+770.970455386" observedRunningTime="2025-12-02 10:26:21.985996862 +0000 UTC m=+771.695363399" watchObservedRunningTime="2025-12-02 10:26:21.988586645 +0000 UTC m=+771.697953132" Dec 02 10:26:22 crc kubenswrapper[4711]: I1202 10:26:22.586556 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:26:22 crc kubenswrapper[4711]: I1202 
10:26:22.586718 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:26:22 crc kubenswrapper[4711]: I1202 10:26:22.586805 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:26:22 crc kubenswrapper[4711]: I1202 10:26:22.588266 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e29911e1e38ecebeadbbef681ae791a5f19b2f30398553fbf3d8e99e960526fb"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:26:22 crc kubenswrapper[4711]: I1202 10:26:22.588431 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://e29911e1e38ecebeadbbef681ae791a5f19b2f30398553fbf3d8e99e960526fb" gracePeriod=600 Dec 02 10:26:22 crc kubenswrapper[4711]: I1202 10:26:22.977407 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="e29911e1e38ecebeadbbef681ae791a5f19b2f30398553fbf3d8e99e960526fb" exitCode=0 Dec 02 10:26:22 crc kubenswrapper[4711]: I1202 10:26:22.977421 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"e29911e1e38ecebeadbbef681ae791a5f19b2f30398553fbf3d8e99e960526fb"} Dec 02 10:26:22 crc 
kubenswrapper[4711]: I1202 10:26:22.977833 4711 scope.go:117] "RemoveContainer" containerID="a8486a79de57fb21bfeb54f206501206d2dc4aa2ed4f085965d4cb08f9a3874d" Dec 02 10:26:23 crc kubenswrapper[4711]: I1202 10:26:23.986712 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" event={"ID":"2fa89fd1-9149-414e-8214-c1bbb1563330","Type":"ContainerStarted","Data":"a240c75e85cacc31a963e005e41ac6652f42d56a6f147b51c78b124c00119111"} Dec 02 10:26:24 crc kubenswrapper[4711]: I1202 10:26:24.001465 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"8c4568791abe9bd7256ecd483bef73160af4505d06199fa89bd749115edf5f3a"} Dec 02 10:26:24 crc kubenswrapper[4711]: I1202 10:26:24.014054 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ddk76" podStartSLOduration=2.241001292 podStartE2EDuration="7.014032443s" podCreationTimestamp="2025-12-02 10:26:17 +0000 UTC" firstStartedPulling="2025-12-02 10:26:18.151345628 +0000 UTC m=+767.860712085" lastFinishedPulling="2025-12-02 10:26:22.924376789 +0000 UTC m=+772.633743236" observedRunningTime="2025-12-02 10:26:24.010486194 +0000 UTC m=+773.719852661" watchObservedRunningTime="2025-12-02 10:26:24.014032443 +0000 UTC m=+773.723398880" Dec 02 10:26:27 crc kubenswrapper[4711]: I1202 10:26:27.773983 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-6ndb5" Dec 02 10:26:28 crc kubenswrapper[4711]: I1202 10:26:27.999978 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:28 crc kubenswrapper[4711]: I1202 10:26:28.000075 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:28 crc kubenswrapper[4711]: I1202 10:26:28.008573 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:28 crc kubenswrapper[4711]: I1202 10:26:28.035172 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58fcf6886f-ttg9m" Dec 02 10:26:28 crc kubenswrapper[4711]: I1202 10:26:28.094137 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2lxx"] Dec 02 10:26:38 crc kubenswrapper[4711]: I1202 10:26:38.296700 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-qz4ft" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.582189 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz"] Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.583919 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.586579 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.600791 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz"] Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.625406 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.625492 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.625527 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp6d5\" (UniqueName: \"kubernetes.io/projected/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-kube-api-access-vp6d5\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: 
I1202 10:26:53.726474 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.726560 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.726594 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp6d5\" (UniqueName: \"kubernetes.io/projected/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-kube-api-access-vp6d5\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.727450 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.727547 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.757537 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp6d5\" (UniqueName: \"kubernetes.io/projected/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-kube-api-access-vp6d5\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:53 crc kubenswrapper[4711]: I1202 10:26:53.905547 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:54 crc kubenswrapper[4711]: I1202 10:26:54.093247 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-g2lxx" podUID="8c1f70ef-1183-4621-bb91-ffe2d31fa391" containerName="console" containerID="cri-o://82f8131babe3018bc6848c4a50122f7ca74b1e010d2392d24cc4cad392b779f1" gracePeriod=15 Dec 02 10:26:54 crc kubenswrapper[4711]: I1202 10:26:54.201344 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz"] Dec 02 10:26:54 crc kubenswrapper[4711]: W1202 10:26:54.221214 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cda2fc5_6d4e_4770_8e98_2139ee2cee6c.slice/crio-0a696a4d409fe75ea0e5d0816c3a6e90ff17ef8af01d208715988a476dfcacbe WatchSource:0}: Error finding container 0a696a4d409fe75ea0e5d0816c3a6e90ff17ef8af01d208715988a476dfcacbe: Status 404 returned 
error can't find the container with id 0a696a4d409fe75ea0e5d0816c3a6e90ff17ef8af01d208715988a476dfcacbe Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.227624 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2lxx_8c1f70ef-1183-4621-bb91-ffe2d31fa391/console/0.log" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.227715 4711 generic.go:334] "Generic (PLEG): container finished" podID="8c1f70ef-1183-4621-bb91-ffe2d31fa391" containerID="82f8131babe3018bc6848c4a50122f7ca74b1e010d2392d24cc4cad392b779f1" exitCode=2 Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.227895 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2lxx" event={"ID":"8c1f70ef-1183-4621-bb91-ffe2d31fa391","Type":"ContainerDied","Data":"82f8131babe3018bc6848c4a50122f7ca74b1e010d2392d24cc4cad392b779f1"} Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.230983 4711 generic.go:334] "Generic (PLEG): container finished" podID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerID="98f2a0c2370d2e062057752185b2fa507061777a2876433eeb1706c5ad05b612" exitCode=0 Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.231051 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" event={"ID":"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c","Type":"ContainerDied","Data":"98f2a0c2370d2e062057752185b2fa507061777a2876433eeb1706c5ad05b612"} Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.231094 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" event={"ID":"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c","Type":"ContainerStarted","Data":"0a696a4d409fe75ea0e5d0816c3a6e90ff17ef8af01d208715988a476dfcacbe"} Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.745253 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-g2lxx_8c1f70ef-1183-4621-bb91-ffe2d31fa391/console/0.log" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.745668 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.857402 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-trusted-ca-bundle\") pod \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.857496 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-oauth-serving-cert\") pod \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.857525 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwzlg\" (UniqueName: \"kubernetes.io/projected/8c1f70ef-1183-4621-bb91-ffe2d31fa391-kube-api-access-cwzlg\") pod \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.857550 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-serving-cert\") pod \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.857570 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-service-ca\") pod \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.857599 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-oauth-config\") pod \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.857628 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-config\") pod \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\" (UID: \"8c1f70ef-1183-4621-bb91-ffe2d31fa391\") " Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.858425 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-service-ca" (OuterVolumeSpecName: "service-ca") pod "8c1f70ef-1183-4621-bb91-ffe2d31fa391" (UID: "8c1f70ef-1183-4621-bb91-ffe2d31fa391"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.858450 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8c1f70ef-1183-4621-bb91-ffe2d31fa391" (UID: "8c1f70ef-1183-4621-bb91-ffe2d31fa391"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.858544 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-config" (OuterVolumeSpecName: "console-config") pod "8c1f70ef-1183-4621-bb91-ffe2d31fa391" (UID: "8c1f70ef-1183-4621-bb91-ffe2d31fa391"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.858850 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8c1f70ef-1183-4621-bb91-ffe2d31fa391" (UID: "8c1f70ef-1183-4621-bb91-ffe2d31fa391"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.864641 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1f70ef-1183-4621-bb91-ffe2d31fa391-kube-api-access-cwzlg" (OuterVolumeSpecName: "kube-api-access-cwzlg") pod "8c1f70ef-1183-4621-bb91-ffe2d31fa391" (UID: "8c1f70ef-1183-4621-bb91-ffe2d31fa391"). InnerVolumeSpecName "kube-api-access-cwzlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.865297 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8c1f70ef-1183-4621-bb91-ffe2d31fa391" (UID: "8c1f70ef-1183-4621-bb91-ffe2d31fa391"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.865606 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8c1f70ef-1183-4621-bb91-ffe2d31fa391" (UID: "8c1f70ef-1183-4621-bb91-ffe2d31fa391"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.959439 4711 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.959468 4711 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.959479 4711 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.959487 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwzlg\" (UniqueName: \"kubernetes.io/projected/8c1f70ef-1183-4621-bb91-ffe2d31fa391-kube-api-access-cwzlg\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.959500 4711 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.959512 4711 reconciler_common.go:293] "Volume detached for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c1f70ef-1183-4621-bb91-ffe2d31fa391-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:55 crc kubenswrapper[4711]: I1202 10:26:55.959544 4711 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c1f70ef-1183-4621-bb91-ffe2d31fa391-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.237253 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2lxx_8c1f70ef-1183-4621-bb91-ffe2d31fa391/console/0.log" Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.237308 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2lxx" event={"ID":"8c1f70ef-1183-4621-bb91-ffe2d31fa391","Type":"ContainerDied","Data":"97450e7ec449ec2329c227862bd350501e922e62a43afd50d28d6e215254dff8"} Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.237351 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2lxx" Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.237353 4711 scope.go:117] "RemoveContainer" containerID="82f8131babe3018bc6848c4a50122f7ca74b1e010d2392d24cc4cad392b779f1" Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.270424 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2lxx"] Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.277763 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-g2lxx"] Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.944311 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b4rnp"] Dec 02 10:26:56 crc kubenswrapper[4711]: E1202 10:26:56.944673 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1f70ef-1183-4621-bb91-ffe2d31fa391" containerName="console" Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.944732 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1f70ef-1183-4621-bb91-ffe2d31fa391" containerName="console" Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.944859 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1f70ef-1183-4621-bb91-ffe2d31fa391" containerName="console" Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.945804 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:56 crc kubenswrapper[4711]: I1202 10:26:56.964591 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4rnp"] Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.075762 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czp97\" (UniqueName: \"kubernetes.io/projected/5fd3c858-45dc-4ced-a803-094d230c530a-kube-api-access-czp97\") pod \"redhat-operators-b4rnp\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.075837 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-utilities\") pod \"redhat-operators-b4rnp\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.075904 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-catalog-content\") pod \"redhat-operators-b4rnp\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.085081 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1f70ef-1183-4621-bb91-ffe2d31fa391" path="/var/lib/kubelet/pods/8c1f70ef-1183-4621-bb91-ffe2d31fa391/volumes" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.177420 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-catalog-content\") pod 
\"redhat-operators-b4rnp\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.177560 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czp97\" (UniqueName: \"kubernetes.io/projected/5fd3c858-45dc-4ced-a803-094d230c530a-kube-api-access-czp97\") pod \"redhat-operators-b4rnp\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.177609 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-utilities\") pod \"redhat-operators-b4rnp\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.177829 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-catalog-content\") pod \"redhat-operators-b4rnp\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.178121 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-utilities\") pod \"redhat-operators-b4rnp\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.217587 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czp97\" (UniqueName: \"kubernetes.io/projected/5fd3c858-45dc-4ced-a803-094d230c530a-kube-api-access-czp97\") pod \"redhat-operators-b4rnp\" (UID: 
\"5fd3c858-45dc-4ced-a803-094d230c530a\") " pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.246584 4711 generic.go:334] "Generic (PLEG): container finished" podID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerID="3e28e5faf35e1b9ecab70ed6361664c9941681758e57ca1a417a80ef88b3d3b7" exitCode=0 Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.246633 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" event={"ID":"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c","Type":"ContainerDied","Data":"3e28e5faf35e1b9ecab70ed6361664c9941681758e57ca1a417a80ef88b3d3b7"} Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.307265 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:26:57 crc kubenswrapper[4711]: I1202 10:26:57.520240 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4rnp"] Dec 02 10:26:57 crc kubenswrapper[4711]: W1202 10:26:57.525030 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd3c858_45dc_4ced_a803_094d230c530a.slice/crio-8dfe4e0f23fb269dc9faed61253be97674591ccc446860ee10461fc97ed436d8 WatchSource:0}: Error finding container 8dfe4e0f23fb269dc9faed61253be97674591ccc446860ee10461fc97ed436d8: Status 404 returned error can't find the container with id 8dfe4e0f23fb269dc9faed61253be97674591ccc446860ee10461fc97ed436d8 Dec 02 10:26:58 crc kubenswrapper[4711]: I1202 10:26:58.254920 4711 generic.go:334] "Generic (PLEG): container finished" podID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerID="e97bda4836e68f5da0759227b1364eccb768a06865c193a29bf329476c9820f2" exitCode=0 Dec 02 10:26:58 crc kubenswrapper[4711]: I1202 10:26:58.255087 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" event={"ID":"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c","Type":"ContainerDied","Data":"e97bda4836e68f5da0759227b1364eccb768a06865c193a29bf329476c9820f2"} Dec 02 10:26:58 crc kubenswrapper[4711]: I1202 10:26:58.259352 4711 generic.go:334] "Generic (PLEG): container finished" podID="5fd3c858-45dc-4ced-a803-094d230c530a" containerID="7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca" exitCode=0 Dec 02 10:26:58 crc kubenswrapper[4711]: I1202 10:26:58.259389 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4rnp" event={"ID":"5fd3c858-45dc-4ced-a803-094d230c530a","Type":"ContainerDied","Data":"7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca"} Dec 02 10:26:58 crc kubenswrapper[4711]: I1202 10:26:58.259425 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4rnp" event={"ID":"5fd3c858-45dc-4ced-a803-094d230c530a","Type":"ContainerStarted","Data":"8dfe4e0f23fb269dc9faed61253be97674591ccc446860ee10461fc97ed436d8"} Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.267594 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4rnp" event={"ID":"5fd3c858-45dc-4ced-a803-094d230c530a","Type":"ContainerStarted","Data":"defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179"} Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.502565 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.503878 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp6d5\" (UniqueName: \"kubernetes.io/projected/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-kube-api-access-vp6d5\") pod \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.503927 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-bundle\") pod \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.506228 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-bundle" (OuterVolumeSpecName: "bundle") pod "2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" (UID: "2cda2fc5-6d4e-4770-8e98-2139ee2cee6c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.514669 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-kube-api-access-vp6d5" (OuterVolumeSpecName: "kube-api-access-vp6d5") pod "2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" (UID: "2cda2fc5-6d4e-4770-8e98-2139ee2cee6c"). InnerVolumeSpecName "kube-api-access-vp6d5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.604689 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-util\") pod \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\" (UID: \"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c\") " Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.605034 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp6d5\" (UniqueName: \"kubernetes.io/projected/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-kube-api-access-vp6d5\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.605051 4711 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.618939 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-util" (OuterVolumeSpecName: "util") pod "2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" (UID: "2cda2fc5-6d4e-4770-8e98-2139ee2cee6c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:26:59 crc kubenswrapper[4711]: I1202 10:26:59.706291 4711 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cda2fc5-6d4e-4770-8e98-2139ee2cee6c-util\") on node \"crc\" DevicePath \"\"" Dec 02 10:27:00 crc kubenswrapper[4711]: I1202 10:27:00.278700 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" event={"ID":"2cda2fc5-6d4e-4770-8e98-2139ee2cee6c","Type":"ContainerDied","Data":"0a696a4d409fe75ea0e5d0816c3a6e90ff17ef8af01d208715988a476dfcacbe"} Dec 02 10:27:00 crc kubenswrapper[4711]: I1202 10:27:00.279202 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a696a4d409fe75ea0e5d0816c3a6e90ff17ef8af01d208715988a476dfcacbe" Dec 02 10:27:00 crc kubenswrapper[4711]: I1202 10:27:00.278744 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz" Dec 02 10:27:00 crc kubenswrapper[4711]: I1202 10:27:00.282097 4711 generic.go:334] "Generic (PLEG): container finished" podID="5fd3c858-45dc-4ced-a803-094d230c530a" containerID="defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179" exitCode=0 Dec 02 10:27:00 crc kubenswrapper[4711]: I1202 10:27:00.282178 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4rnp" event={"ID":"5fd3c858-45dc-4ced-a803-094d230c530a","Type":"ContainerDied","Data":"defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179"} Dec 02 10:27:01 crc kubenswrapper[4711]: I1202 10:27:01.290734 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4rnp" 
event={"ID":"5fd3c858-45dc-4ced-a803-094d230c530a","Type":"ContainerStarted","Data":"91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f"} Dec 02 10:27:01 crc kubenswrapper[4711]: I1202 10:27:01.318758 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b4rnp" podStartSLOduration=2.835384451 podStartE2EDuration="5.318736203s" podCreationTimestamp="2025-12-02 10:26:56 +0000 UTC" firstStartedPulling="2025-12-02 10:26:58.262232777 +0000 UTC m=+807.971599224" lastFinishedPulling="2025-12-02 10:27:00.745584489 +0000 UTC m=+810.454950976" observedRunningTime="2025-12-02 10:27:01.316655993 +0000 UTC m=+811.026022440" watchObservedRunningTime="2025-12-02 10:27:01.318736203 +0000 UTC m=+811.028102650" Dec 02 10:27:07 crc kubenswrapper[4711]: I1202 10:27:07.308382 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:27:07 crc kubenswrapper[4711]: I1202 10:27:07.308996 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:27:07 crc kubenswrapper[4711]: I1202 10:27:07.369221 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:27:07 crc kubenswrapper[4711]: I1202 10:27:07.411101 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b4rnp" Dec 02 10:27:08 crc kubenswrapper[4711]: I1202 10:27:08.935355 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4rnp"] Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.341487 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b4rnp" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" containerName="registry-server" 
containerID="cri-o://91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f" gracePeriod=2 Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.605904 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"] Dec 02 10:27:09 crc kubenswrapper[4711]: E1202 10:27:09.606540 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerName="pull" Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.606556 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerName="pull" Dec 02 10:27:09 crc kubenswrapper[4711]: E1202 10:27:09.606569 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerName="util" Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.606576 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerName="util" Dec 02 10:27:09 crc kubenswrapper[4711]: E1202 10:27:09.606591 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerName="extract" Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.606600 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerName="extract" Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.606725 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cda2fc5-6d4e-4770-8e98-2139ee2cee6c" containerName="extract" Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.607283 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.610383 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.611474 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.611605 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-txbcd"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.611682 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.611698 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.629695 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrvz\" (UniqueName: \"kubernetes.io/projected/b61b9ed6-c590-43f9-b029-82f457a65986-kube-api-access-9rrvz\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.629738 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b61b9ed6-c590-43f9-b029-82f457a65986-webhook-cert\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.629777 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b61b9ed6-c590-43f9-b029-82f457a65986-apiservice-cert\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.632010 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"]
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.730658 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rrvz\" (UniqueName: \"kubernetes.io/projected/b61b9ed6-c590-43f9-b029-82f457a65986-kube-api-access-9rrvz\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.730748 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b61b9ed6-c590-43f9-b029-82f457a65986-webhook-cert\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.730793 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b61b9ed6-c590-43f9-b029-82f457a65986-apiservice-cert\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.738220 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b61b9ed6-c590-43f9-b029-82f457a65986-apiservice-cert\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.740584 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b61b9ed6-c590-43f9-b029-82f457a65986-webhook-cert\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.750625 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rrvz\" (UniqueName: \"kubernetes.io/projected/b61b9ed6-c590-43f9-b029-82f457a65986-kube-api-access-9rrvz\") pod \"metallb-operator-controller-manager-5c8d4f74b4-pmd42\" (UID: \"b61b9ed6-c590-43f9-b029-82f457a65986\") " pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.811387 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4rnp"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.832194 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-utilities\") pod \"5fd3c858-45dc-4ced-a803-094d230c530a\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") "
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.832238 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-catalog-content\") pod \"5fd3c858-45dc-4ced-a803-094d230c530a\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") "
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.832260 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czp97\" (UniqueName: \"kubernetes.io/projected/5fd3c858-45dc-4ced-a803-094d230c530a-kube-api-access-czp97\") pod \"5fd3c858-45dc-4ced-a803-094d230c530a\" (UID: \"5fd3c858-45dc-4ced-a803-094d230c530a\") "
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.834633 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-utilities" (OuterVolumeSpecName: "utilities") pod "5fd3c858-45dc-4ced-a803-094d230c530a" (UID: "5fd3c858-45dc-4ced-a803-094d230c530a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.836145 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd3c858-45dc-4ced-a803-094d230c530a-kube-api-access-czp97" (OuterVolumeSpecName: "kube-api-access-czp97") pod "5fd3c858-45dc-4ced-a803-094d230c530a" (UID: "5fd3c858-45dc-4ced-a803-094d230c530a"). InnerVolumeSpecName "kube-api-access-czp97". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.932236 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.933978 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.934124 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czp97\" (UniqueName: \"kubernetes.io/projected/5fd3c858-45dc-4ced-a803-094d230c530a-kube-api-access-czp97\") on node \"crc\" DevicePath \"\""
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.944362 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"]
Dec 02 10:27:09 crc kubenswrapper[4711]: E1202 10:27:09.944604 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" containerName="extract-utilities"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.944619 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" containerName="extract-utilities"
Dec 02 10:27:09 crc kubenswrapper[4711]: E1202 10:27:09.944631 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" containerName="registry-server"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.944638 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" containerName="registry-server"
Dec 02 10:27:09 crc kubenswrapper[4711]: E1202 10:27:09.944653 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" containerName="extract-content"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.944659 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" containerName="extract-content"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.944774 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" containerName="registry-server"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.945299 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.954554 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.954912 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.955114 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dvmcb"
Dec 02 10:27:09 crc kubenswrapper[4711]: I1202 10:27:09.957877 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"]
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.035199 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz862\" (UniqueName: \"kubernetes.io/projected/02ce4661-516f-4d17-b5b8-69958d4c4ee8-kube-api-access-tz862\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.035269 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02ce4661-516f-4d17-b5b8-69958d4c4ee8-webhook-cert\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.035299 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02ce4661-516f-4d17-b5b8-69958d4c4ee8-apiservice-cert\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.136683 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz862\" (UniqueName: \"kubernetes.io/projected/02ce4661-516f-4d17-b5b8-69958d4c4ee8-kube-api-access-tz862\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.136762 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02ce4661-516f-4d17-b5b8-69958d4c4ee8-webhook-cert\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.136805 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02ce4661-516f-4d17-b5b8-69958d4c4ee8-apiservice-cert\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.141145 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02ce4661-516f-4d17-b5b8-69958d4c4ee8-apiservice-cert\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.141608 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02ce4661-516f-4d17-b5b8-69958d4c4ee8-webhook-cert\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.156935 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz862\" (UniqueName: \"kubernetes.io/projected/02ce4661-516f-4d17-b5b8-69958d4c4ee8-kube-api-access-tz862\") pod \"metallb-operator-webhook-server-575f6f4cd9-2pkhc\" (UID: \"02ce4661-516f-4d17-b5b8-69958d4c4ee8\") " pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.270257 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.352395 4711 generic.go:334] "Generic (PLEG): container finished" podID="5fd3c858-45dc-4ced-a803-094d230c530a" containerID="91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f" exitCode=0
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.352467 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4rnp"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.352486 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4rnp" event={"ID":"5fd3c858-45dc-4ced-a803-094d230c530a","Type":"ContainerDied","Data":"91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f"}
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.353780 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4rnp" event={"ID":"5fd3c858-45dc-4ced-a803-094d230c530a","Type":"ContainerDied","Data":"8dfe4e0f23fb269dc9faed61253be97674591ccc446860ee10461fc97ed436d8"}
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.353809 4711 scope.go:117] "RemoveContainer" containerID="91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.363486 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"]
Dec 02 10:27:10 crc kubenswrapper[4711]: W1202 10:27:10.374398 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb61b9ed6_c590_43f9_b029_82f457a65986.slice/crio-d39af092cdcc59a27161378e1e5bebe7d1b80286c8c2269ef6fffe1786b074e7 WatchSource:0}: Error finding container d39af092cdcc59a27161378e1e5bebe7d1b80286c8c2269ef6fffe1786b074e7: Status 404 returned error can't find the container with id d39af092cdcc59a27161378e1e5bebe7d1b80286c8c2269ef6fffe1786b074e7
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.381936 4711 scope.go:117] "RemoveContainer" containerID="defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.458116 4711 scope.go:117] "RemoveContainer" containerID="7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.494132 4711 scope.go:117] "RemoveContainer" containerID="91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f"
Dec 02 10:27:10 crc kubenswrapper[4711]: E1202 10:27:10.498417 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f\": container with ID starting with 91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f not found: ID does not exist" containerID="91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.498454 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f"} err="failed to get container status \"91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f\": rpc error: code = NotFound desc = could not find container \"91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f\": container with ID starting with 91fca118502b094a38b0d96b6595b625571091f2c646299d6698079178bbd42f not found: ID does not exist"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.498482 4711 scope.go:117] "RemoveContainer" containerID="defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179"
Dec 02 10:27:10 crc kubenswrapper[4711]: E1202 10:27:10.498934 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179\": container with ID starting with defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179 not found: ID does not exist" containerID="defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.498984 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179"} err="failed to get container status \"defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179\": rpc error: code = NotFound desc = could not find container \"defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179\": container with ID starting with defd7cce7add2e438bf729261622d37d5f89a822b2917ffa553bf9969abef179 not found: ID does not exist"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.499012 4711 scope.go:117] "RemoveContainer" containerID="7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca"
Dec 02 10:27:10 crc kubenswrapper[4711]: E1202 10:27:10.499431 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca\": container with ID starting with 7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca not found: ID does not exist" containerID="7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.499455 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca"} err="failed to get container status \"7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca\": rpc error: code = NotFound desc = could not find container \"7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca\": container with ID starting with 7c6e193bff1e6cd91b8c007c02c3e25733d1b2bb999eef5a5c4ecd72d0dd20ca not found: ID does not exist"
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.719263 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"]
Dec 02 10:27:10 crc kubenswrapper[4711]: W1202 10:27:10.731075 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02ce4661_516f_4d17_b5b8_69958d4c4ee8.slice/crio-90c76a27be81df4e8637575115609caec3ac89713f9035fff383eaa27183cc9a WatchSource:0}: Error finding container 90c76a27be81df4e8637575115609caec3ac89713f9035fff383eaa27183cc9a: Status 404 returned error can't find the container with id 90c76a27be81df4e8637575115609caec3ac89713f9035fff383eaa27183cc9a
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.853503 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fd3c858-45dc-4ced-a803-094d230c530a" (UID: "5fd3c858-45dc-4ced-a803-094d230c530a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.949174 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd3c858-45dc-4ced-a803-094d230c530a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.980766 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4rnp"]
Dec 02 10:27:10 crc kubenswrapper[4711]: I1202 10:27:10.987192 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b4rnp"]
Dec 02 10:27:11 crc kubenswrapper[4711]: I1202 10:27:11.089833 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd3c858-45dc-4ced-a803-094d230c530a" path="/var/lib/kubelet/pods/5fd3c858-45dc-4ced-a803-094d230c530a/volumes"
Dec 02 10:27:11 crc kubenswrapper[4711]: I1202 10:27:11.359923 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42" event={"ID":"b61b9ed6-c590-43f9-b029-82f457a65986","Type":"ContainerStarted","Data":"d39af092cdcc59a27161378e1e5bebe7d1b80286c8c2269ef6fffe1786b074e7"}
Dec 02 10:27:11 crc kubenswrapper[4711]: I1202 10:27:11.360854 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc" event={"ID":"02ce4661-516f-4d17-b5b8-69958d4c4ee8","Type":"ContainerStarted","Data":"90c76a27be81df4e8637575115609caec3ac89713f9035fff383eaa27183cc9a"}
Dec 02 10:27:17 crc kubenswrapper[4711]: I1202 10:27:17.417613 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42" event={"ID":"b61b9ed6-c590-43f9-b029-82f457a65986","Type":"ContainerStarted","Data":"0a2c92bb3a71c4ba7264700263399b7df596d283213ad725ae0d307f66666de0"}
Dec 02 10:27:17 crc kubenswrapper[4711]: I1202 10:27:17.418267 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:17 crc kubenswrapper[4711]: I1202 10:27:17.419373 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc" event={"ID":"02ce4661-516f-4d17-b5b8-69958d4c4ee8","Type":"ContainerStarted","Data":"c8e3f2b35d0ccfadd6ad7fee9a8f578b15eefbfa93ab4005aef557b83928fd28"}
Dec 02 10:27:17 crc kubenswrapper[4711]: I1202 10:27:17.419784 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:17 crc kubenswrapper[4711]: I1202 10:27:17.445856 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42" podStartSLOduration=1.815571354 podStartE2EDuration="8.445821747s" podCreationTimestamp="2025-12-02 10:27:09 +0000 UTC" firstStartedPulling="2025-12-02 10:27:10.382015797 +0000 UTC m=+820.091382254" lastFinishedPulling="2025-12-02 10:27:17.0122662 +0000 UTC m=+826.721632647" observedRunningTime="2025-12-02 10:27:17.444614853 +0000 UTC m=+827.153981310" watchObservedRunningTime="2025-12-02 10:27:17.445821747 +0000 UTC m=+827.155188194"
Dec 02 10:27:17 crc kubenswrapper[4711]: I1202 10:27:17.469198 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc" podStartSLOduration=2.173428121 podStartE2EDuration="8.469176198s" podCreationTimestamp="2025-12-02 10:27:09 +0000 UTC" firstStartedPulling="2025-12-02 10:27:10.73301145 +0000 UTC m=+820.442377897" lastFinishedPulling="2025-12-02 10:27:17.028759537 +0000 UTC m=+826.738125974" observedRunningTime="2025-12-02 10:27:17.462790877 +0000 UTC m=+827.172157344" watchObservedRunningTime="2025-12-02 10:27:17.469176198 +0000 UTC m=+827.178542645"
Dec 02 10:27:30 crc kubenswrapper[4711]: I1202 10:27:30.281572 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-575f6f4cd9-2pkhc"
Dec 02 10:27:49 crc kubenswrapper[4711]: I1202 10:27:49.934926 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5c8d4f74b4-pmd42"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.857091 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gxh9q"]
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.861110 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.864538 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qbxj4"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.868106 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846"]
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.869006 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.887810 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.887856 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.895298 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.911313 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846"]
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.973389 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fjkgx"]
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.974278 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fjkgx"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.976905 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.977046 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4h7jg"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.977854 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Dec 02 10:27:50 crc kubenswrapper[4711]: I1202 10:27:50.977921 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.000727 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-z96st"]
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.005345 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-z96st"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.007664 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.018346 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-z96st"]
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.018665 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knnrg\" (UniqueName: \"kubernetes.io/projected/e6a8173c-9360-4880-98f8-c314de0da129-kube-api-access-knnrg\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.018709 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-t2846\" (UID: \"08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.018867 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-metrics\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.019046 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-frr-conf\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.019100 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-reloader\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.019163 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dgzq\" (UniqueName: \"kubernetes.io/projected/08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0-kube-api-access-5dgzq\") pod \"frr-k8s-webhook-server-7fcb986d4-t2846\" (UID: \"08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.019227 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6a8173c-9360-4880-98f8-c314de0da129-frr-startup\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.019292 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-frr-sockets\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.019320 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6a8173c-9360-4880-98f8-c314de0da129-metrics-certs\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.120842 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-metrics\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.120895 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.120938 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-frr-conf\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.120985 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-reloader\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121018 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d6e279b9-33a8-48d9-9442-be75926b530c-metallb-excludel2\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121045 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dgzq\" (UniqueName: \"kubernetes.io/projected/08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0-kube-api-access-5dgzq\") pod \"frr-k8s-webhook-server-7fcb986d4-t2846\" (UID: \"08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121146 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6a8173c-9360-4880-98f8-c314de0da129-frr-startup\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121196 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-frr-sockets\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121333 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6a8173c-9360-4880-98f8-c314de0da129-metrics-certs\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121419 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-metrics\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121463 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-frr-conf\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121480 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-reloader\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.121686 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6a8173c-9360-4880-98f8-c314de0da129-frr-sockets\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.122302 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6a8173c-9360-4880-98f8-c314de0da129-frr-startup\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.122392 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-cert\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.122495 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqgn\" (UniqueName: \"kubernetes.io/projected/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-kube-api-access-2kqgn\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.122575 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2gl7\" (UniqueName: \"kubernetes.io/projected/d6e279b9-33a8-48d9-9442-be75926b530c-kube-api-access-x2gl7\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.122655 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knnrg\" (UniqueName: \"kubernetes.io/projected/e6a8173c-9360-4880-98f8-c314de0da129-kube-api-access-knnrg\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.122682 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-metrics-certs\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.122983 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-metrics-certs\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.123084 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-t2846\" (UID: \"08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846"
Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.126370 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6a8173c-9360-4880-98f8-c314de0da129-metrics-certs\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.136606 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-t2846\" (UID: \"08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.143324 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knnrg\" (UniqueName: \"kubernetes.io/projected/e6a8173c-9360-4880-98f8-c314de0da129-kube-api-access-knnrg\") pod \"frr-k8s-gxh9q\" (UID: \"e6a8173c-9360-4880-98f8-c314de0da129\") " pod="metallb-system/frr-k8s-gxh9q" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.147658 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dgzq\" (UniqueName: \"kubernetes.io/projected/08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0-kube-api-access-5dgzq\") pod \"frr-k8s-webhook-server-7fcb986d4-t2846\" (UID: \"08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.205321 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gxh9q" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.214516 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.224389 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-cert\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.224446 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kqgn\" (UniqueName: \"kubernetes.io/projected/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-kube-api-access-2kqgn\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.224471 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2gl7\" (UniqueName: \"kubernetes.io/projected/d6e279b9-33a8-48d9-9442-be75926b530c-kube-api-access-x2gl7\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.224496 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-metrics-certs\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.224512 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-metrics-certs\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:51 crc 
kubenswrapper[4711]: I1202 10:27:51.224545 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.224591 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d6e279b9-33a8-48d9-9442-be75926b530c-metallb-excludel2\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.230174 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d6e279b9-33a8-48d9-9442-be75926b530c-metallb-excludel2\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:51 crc kubenswrapper[4711]: E1202 10:27:51.230975 4711 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 10:27:51 crc kubenswrapper[4711]: E1202 10:27:51.231121 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist podName:d6e279b9-33a8-48d9-9442-be75926b530c nodeName:}" failed. No retries permitted until 2025-12-02 10:27:51.731055845 +0000 UTC m=+861.440422302 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist") pod "speaker-fjkgx" (UID: "d6e279b9-33a8-48d9-9442-be75926b530c") : secret "metallb-memberlist" not found Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.231691 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-metrics-certs\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.231915 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-metrics-certs\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.232020 4711 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.239196 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-cert\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.247087 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2gl7\" (UniqueName: \"kubernetes.io/projected/d6e279b9-33a8-48d9-9442-be75926b530c-kube-api-access-x2gl7\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.249740 4711 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-2kqgn\" (UniqueName: \"kubernetes.io/projected/50c9eab8-843d-46f0-8af8-85bedeb5c0e9-kube-api-access-2kqgn\") pod \"controller-f8648f98b-z96st\" (UID: \"50c9eab8-843d-46f0-8af8-85bedeb5c0e9\") " pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.360248 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:27:51 crc kubenswrapper[4711]: W1202 10:27:51.544788 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50c9eab8_843d_46f0_8af8_85bedeb5c0e9.slice/crio-c9bf5a6acd154e7ce4515c7d18ccfcfaebc7b1038c2d1e9cc5c9881f44d80601 WatchSource:0}: Error finding container c9bf5a6acd154e7ce4515c7d18ccfcfaebc7b1038c2d1e9cc5c9881f44d80601: Status 404 returned error can't find the container with id c9bf5a6acd154e7ce4515c7d18ccfcfaebc7b1038c2d1e9cc5c9881f44d80601 Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.547373 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-z96st"] Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.619143 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846"] Dec 02 10:27:51 crc kubenswrapper[4711]: W1202 10:27:51.626667 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08426bfc_3b12_4f6a_af6e_83b3bb4bf5a0.slice/crio-5530cee929efbf42aa2604a8e017af60b3b727219188489b7b7610481ecb374d WatchSource:0}: Error finding container 5530cee929efbf42aa2604a8e017af60b3b727219188489b7b7610481ecb374d: Status 404 returned error can't find the container with id 5530cee929efbf42aa2604a8e017af60b3b727219188489b7b7610481ecb374d Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.646442 4711 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846" event={"ID":"08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0","Type":"ContainerStarted","Data":"5530cee929efbf42aa2604a8e017af60b3b727219188489b7b7610481ecb374d"} Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.648346 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-z96st" event={"ID":"50c9eab8-843d-46f0-8af8-85bedeb5c0e9","Type":"ContainerStarted","Data":"c9bf5a6acd154e7ce4515c7d18ccfcfaebc7b1038c2d1e9cc5c9881f44d80601"} Dec 02 10:27:51 crc kubenswrapper[4711]: I1202 10:27:51.830724 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:51 crc kubenswrapper[4711]: E1202 10:27:51.830972 4711 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 10:27:51 crc kubenswrapper[4711]: E1202 10:27:51.831771 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist podName:d6e279b9-33a8-48d9-9442-be75926b530c nodeName:}" failed. No retries permitted until 2025-12-02 10:27:52.831734548 +0000 UTC m=+862.541101005 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist") pod "speaker-fjkgx" (UID: "d6e279b9-33a8-48d9-9442-be75926b530c") : secret "metallb-memberlist" not found Dec 02 10:27:52 crc kubenswrapper[4711]: I1202 10:27:52.845802 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:52 crc kubenswrapper[4711]: E1202 10:27:52.846054 4711 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 10:27:52 crc kubenswrapper[4711]: E1202 10:27:52.846150 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist podName:d6e279b9-33a8-48d9-9442-be75926b530c nodeName:}" failed. No retries permitted until 2025-12-02 10:27:54.846099006 +0000 UTC m=+864.555465463 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist") pod "speaker-fjkgx" (UID: "d6e279b9-33a8-48d9-9442-be75926b530c") : secret "metallb-memberlist" not found Dec 02 10:27:53 crc kubenswrapper[4711]: I1202 10:27:53.667819 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerStarted","Data":"5a5877b46bb89f8fd661cd1c1d77bd8e0004c49cb34550e82fd43d6482be170c"} Dec 02 10:27:53 crc kubenswrapper[4711]: I1202 10:27:53.669288 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-z96st" event={"ID":"50c9eab8-843d-46f0-8af8-85bedeb5c0e9","Type":"ContainerStarted","Data":"f4651c4b6829256faa435cbb295b77a918878968fbd8119a53add7f4725ca054"} Dec 02 10:27:53 crc kubenswrapper[4711]: I1202 10:27:53.669335 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-z96st" event={"ID":"50c9eab8-843d-46f0-8af8-85bedeb5c0e9","Type":"ContainerStarted","Data":"80564adfbed5e799653dcad83c7eb77e0b27a658a2be72192585136b46be8c2f"} Dec 02 10:27:53 crc kubenswrapper[4711]: I1202 10:27:53.669458 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:27:53 crc kubenswrapper[4711]: I1202 10:27:53.693963 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-z96st" podStartSLOduration=3.69392219 podStartE2EDuration="3.69392219s" podCreationTimestamp="2025-12-02 10:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:27:53.688755337 +0000 UTC m=+863.398121794" watchObservedRunningTime="2025-12-02 10:27:53.69392219 +0000 UTC m=+863.403288637" Dec 02 10:27:54 crc kubenswrapper[4711]: I1202 10:27:54.871863 
4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:54 crc kubenswrapper[4711]: I1202 10:27:54.900190 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d6e279b9-33a8-48d9-9442-be75926b530c-memberlist\") pod \"speaker-fjkgx\" (UID: \"d6e279b9-33a8-48d9-9442-be75926b530c\") " pod="metallb-system/speaker-fjkgx" Dec 02 10:27:55 crc kubenswrapper[4711]: I1202 10:27:55.190448 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fjkgx" Dec 02 10:27:55 crc kubenswrapper[4711]: W1202 10:27:55.238984 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e279b9_33a8_48d9_9442_be75926b530c.slice/crio-58e5b10a4e8892cda9539426e5ed75985d3bdca7e3bea4158678ab379bc1be32 WatchSource:0}: Error finding container 58e5b10a4e8892cda9539426e5ed75985d3bdca7e3bea4158678ab379bc1be32: Status 404 returned error can't find the container with id 58e5b10a4e8892cda9539426e5ed75985d3bdca7e3bea4158678ab379bc1be32 Dec 02 10:27:55 crc kubenswrapper[4711]: I1202 10:27:55.685175 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fjkgx" event={"ID":"d6e279b9-33a8-48d9-9442-be75926b530c","Type":"ContainerStarted","Data":"71ff3f5c464537649f5c86aaa43c7146550490c21b537b519abf10d8d492f29f"} Dec 02 10:27:55 crc kubenswrapper[4711]: I1202 10:27:55.685563 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fjkgx" event={"ID":"d6e279b9-33a8-48d9-9442-be75926b530c","Type":"ContainerStarted","Data":"58e5b10a4e8892cda9539426e5ed75985d3bdca7e3bea4158678ab379bc1be32"} Dec 02 10:27:56 crc kubenswrapper[4711]: 
I1202 10:27:56.692885 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fjkgx" event={"ID":"d6e279b9-33a8-48d9-9442-be75926b530c","Type":"ContainerStarted","Data":"e281fa4fad68680beee63ba8032de7dd1ee2dd307ed852d266ffb79da51ab282"} Dec 02 10:27:56 crc kubenswrapper[4711]: I1202 10:27:56.693269 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fjkgx" Dec 02 10:28:00 crc kubenswrapper[4711]: I1202 10:28:00.730318 4711 generic.go:334] "Generic (PLEG): container finished" podID="e6a8173c-9360-4880-98f8-c314de0da129" containerID="e53bc8a3126d3d6849dd182333f8344f5f939c5e544862717fde707d1ad53b75" exitCode=0 Dec 02 10:28:00 crc kubenswrapper[4711]: I1202 10:28:00.730704 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerDied","Data":"e53bc8a3126d3d6849dd182333f8344f5f939c5e544862717fde707d1ad53b75"} Dec 02 10:28:00 crc kubenswrapper[4711]: I1202 10:28:00.733506 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846" event={"ID":"08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0","Type":"ContainerStarted","Data":"9c7e8d65be76471ce1abe1d70777746d82e132de7e71d5a2e3f66e19b77d4e1f"} Dec 02 10:28:00 crc kubenswrapper[4711]: I1202 10:28:00.733699 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846" Dec 02 10:28:00 crc kubenswrapper[4711]: I1202 10:28:00.770253 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fjkgx" podStartSLOduration=10.770235045 podStartE2EDuration="10.770235045s" podCreationTimestamp="2025-12-02 10:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:27:56.719846074 +0000 UTC m=+866.429212521" 
watchObservedRunningTime="2025-12-02 10:28:00.770235045 +0000 UTC m=+870.479601492" Dec 02 10:28:00 crc kubenswrapper[4711]: I1202 10:28:00.792035 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846" podStartSLOduration=2.37159556 podStartE2EDuration="10.792002423s" podCreationTimestamp="2025-12-02 10:27:50 +0000 UTC" firstStartedPulling="2025-12-02 10:27:51.629036209 +0000 UTC m=+861.338402666" lastFinishedPulling="2025-12-02 10:28:00.049443072 +0000 UTC m=+869.758809529" observedRunningTime="2025-12-02 10:28:00.788574049 +0000 UTC m=+870.497940566" watchObservedRunningTime="2025-12-02 10:28:00.792002423 +0000 UTC m=+870.501368900" Dec 02 10:28:01 crc kubenswrapper[4711]: I1202 10:28:01.742887 4711 generic.go:334] "Generic (PLEG): container finished" podID="e6a8173c-9360-4880-98f8-c314de0da129" containerID="5116889f67697aa67dbd25c91cefe5994651395845b6cda5892040c49328033c" exitCode=0 Dec 02 10:28:01 crc kubenswrapper[4711]: I1202 10:28:01.742969 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerDied","Data":"5116889f67697aa67dbd25c91cefe5994651395845b6cda5892040c49328033c"} Dec 02 10:28:02 crc kubenswrapper[4711]: I1202 10:28:02.755924 4711 generic.go:334] "Generic (PLEG): container finished" podID="e6a8173c-9360-4880-98f8-c314de0da129" containerID="ed394fd8636d8876470b49f7f528f7fd0ce5332f1aeb43711eb562bc668ff41b" exitCode=0 Dec 02 10:28:02 crc kubenswrapper[4711]: I1202 10:28:02.756024 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerDied","Data":"ed394fd8636d8876470b49f7f528f7fd0ce5332f1aeb43711eb562bc668ff41b"} Dec 02 10:28:03 crc kubenswrapper[4711]: I1202 10:28:03.767635 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" 
event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerStarted","Data":"3d84db802fd4f807036af9dfcade8d337b28efd62a0ff63d3417b2c48b97c9fe"} Dec 02 10:28:03 crc kubenswrapper[4711]: I1202 10:28:03.768237 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerStarted","Data":"88d4dc20ff018935af83a50bc45a629470d29897fcb010a24042aa539b536cdb"} Dec 02 10:28:03 crc kubenswrapper[4711]: I1202 10:28:03.768253 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerStarted","Data":"074205229a7dc5150a718dec5fa9c3794af79bb6c40cf9067ce2213da69ebd76"} Dec 02 10:28:03 crc kubenswrapper[4711]: I1202 10:28:03.768263 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerStarted","Data":"caf427ad9eac0f3287613bdd0bdf7a48dd31c2a6e5668ab238a29891e6553411"} Dec 02 10:28:03 crc kubenswrapper[4711]: I1202 10:28:03.768294 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerStarted","Data":"a9d2486d9e61d329d218121ad224ad5ec50a2ae03a157d65d7dc0d356f56c7b1"} Dec 02 10:28:04 crc kubenswrapper[4711]: I1202 10:28:04.780013 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gxh9q" event={"ID":"e6a8173c-9360-4880-98f8-c314de0da129","Type":"ContainerStarted","Data":"b3c4911edc5b7a6a6d63493384728f7be80ae1fe764aa020f5db5d93a1c29d42"} Dec 02 10:28:04 crc kubenswrapper[4711]: I1202 10:28:04.780277 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gxh9q" Dec 02 10:28:04 crc kubenswrapper[4711]: I1202 10:28:04.815732 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-gxh9q" podStartSLOduration=8.155530129 podStartE2EDuration="14.815705531s" podCreationTimestamp="2025-12-02 10:27:50 +0000 UTC" firstStartedPulling="2025-12-02 10:27:53.372579671 +0000 UTC m=+863.081946128" lastFinishedPulling="2025-12-02 10:28:00.032755083 +0000 UTC m=+869.742121530" observedRunningTime="2025-12-02 10:28:04.811799193 +0000 UTC m=+874.521165650" watchObservedRunningTime="2025-12-02 10:28:04.815705531 +0000 UTC m=+874.525071988" Dec 02 10:28:05 crc kubenswrapper[4711]: I1202 10:28:05.195729 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fjkgx" Dec 02 10:28:06 crc kubenswrapper[4711]: I1202 10:28:06.205564 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gxh9q" Dec 02 10:28:06 crc kubenswrapper[4711]: I1202 10:28:06.242393 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gxh9q" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.355189 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jl8sd"] Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.356729 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jl8sd" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.389382 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jl8sd"] Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.389970 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-pd28q" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.390377 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.390524 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.507318 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8hfl\" (UniqueName: \"kubernetes.io/projected/69c0d4d9-5212-4d49-870d-4caf43c6eead-kube-api-access-w8hfl\") pod \"openstack-operator-index-jl8sd\" (UID: \"69c0d4d9-5212-4d49-870d-4caf43c6eead\") " pod="openstack-operators/openstack-operator-index-jl8sd" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.609069 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8hfl\" (UniqueName: \"kubernetes.io/projected/69c0d4d9-5212-4d49-870d-4caf43c6eead-kube-api-access-w8hfl\") pod \"openstack-operator-index-jl8sd\" (UID: \"69c0d4d9-5212-4d49-870d-4caf43c6eead\") " pod="openstack-operators/openstack-operator-index-jl8sd" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.629066 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8hfl\" (UniqueName: \"kubernetes.io/projected/69c0d4d9-5212-4d49-870d-4caf43c6eead-kube-api-access-w8hfl\") pod \"openstack-operator-index-jl8sd\" (UID: 
\"69c0d4d9-5212-4d49-870d-4caf43c6eead\") " pod="openstack-operators/openstack-operator-index-jl8sd" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.728395 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jl8sd" Dec 02 10:28:08 crc kubenswrapper[4711]: I1202 10:28:08.998347 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jl8sd"] Dec 02 10:28:09 crc kubenswrapper[4711]: I1202 10:28:09.825611 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl8sd" event={"ID":"69c0d4d9-5212-4d49-870d-4caf43c6eead","Type":"ContainerStarted","Data":"4daaaf12f7c225d39714e0f66640f26b26bbad6d3cbce462caec5dbb7d015762"} Dec 02 10:28:11 crc kubenswrapper[4711]: I1202 10:28:11.218906 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-t2846" Dec 02 10:28:11 crc kubenswrapper[4711]: I1202 10:28:11.363808 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-z96st" Dec 02 10:28:11 crc kubenswrapper[4711]: I1202 10:28:11.719167 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jl8sd"] Dec 02 10:28:11 crc kubenswrapper[4711]: I1202 10:28:11.840046 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl8sd" event={"ID":"69c0d4d9-5212-4d49-870d-4caf43c6eead","Type":"ContainerStarted","Data":"bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6"} Dec 02 10:28:11 crc kubenswrapper[4711]: I1202 10:28:11.840267 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-jl8sd" podUID="69c0d4d9-5212-4d49-870d-4caf43c6eead" containerName="registry-server" 
containerID="cri-o://bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6" gracePeriod=2 Dec 02 10:28:11 crc kubenswrapper[4711]: I1202 10:28:11.863267 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jl8sd" podStartSLOduration=1.286025298 podStartE2EDuration="3.863199774s" podCreationTimestamp="2025-12-02 10:28:08 +0000 UTC" firstStartedPulling="2025-12-02 10:28:09.018114208 +0000 UTC m=+878.727480665" lastFinishedPulling="2025-12-02 10:28:11.595288694 +0000 UTC m=+881.304655141" observedRunningTime="2025-12-02 10:28:11.856311646 +0000 UTC m=+881.565678103" watchObservedRunningTime="2025-12-02 10:28:11.863199774 +0000 UTC m=+881.572566251" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.258398 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jl8sd" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.333188 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7dwdr"] Dec 02 10:28:12 crc kubenswrapper[4711]: E1202 10:28:12.333482 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c0d4d9-5212-4d49-870d-4caf43c6eead" containerName="registry-server" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.333512 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c0d4d9-5212-4d49-870d-4caf43c6eead" containerName="registry-server" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.333624 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c0d4d9-5212-4d49-870d-4caf43c6eead" containerName="registry-server" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.334324 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.348093 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7dwdr"] Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.422417 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8hfl\" (UniqueName: \"kubernetes.io/projected/69c0d4d9-5212-4d49-870d-4caf43c6eead-kube-api-access-w8hfl\") pod \"69c0d4d9-5212-4d49-870d-4caf43c6eead\" (UID: \"69c0d4d9-5212-4d49-870d-4caf43c6eead\") " Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.430340 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c0d4d9-5212-4d49-870d-4caf43c6eead-kube-api-access-w8hfl" (OuterVolumeSpecName: "kube-api-access-w8hfl") pod "69c0d4d9-5212-4d49-870d-4caf43c6eead" (UID: "69c0d4d9-5212-4d49-870d-4caf43c6eead"). InnerVolumeSpecName "kube-api-access-w8hfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.523902 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj9cv\" (UniqueName: \"kubernetes.io/projected/1d175158-9d08-4b61-87b8-c9054e78d6aa-kube-api-access-kj9cv\") pod \"openstack-operator-index-7dwdr\" (UID: \"1d175158-9d08-4b61-87b8-c9054e78d6aa\") " pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.524342 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8hfl\" (UniqueName: \"kubernetes.io/projected/69c0d4d9-5212-4d49-870d-4caf43c6eead-kube-api-access-w8hfl\") on node \"crc\" DevicePath \"\"" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.625711 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj9cv\" (UniqueName: \"kubernetes.io/projected/1d175158-9d08-4b61-87b8-c9054e78d6aa-kube-api-access-kj9cv\") pod \"openstack-operator-index-7dwdr\" (UID: \"1d175158-9d08-4b61-87b8-c9054e78d6aa\") " pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.644239 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj9cv\" (UniqueName: \"kubernetes.io/projected/1d175158-9d08-4b61-87b8-c9054e78d6aa-kube-api-access-kj9cv\") pod \"openstack-operator-index-7dwdr\" (UID: \"1d175158-9d08-4b61-87b8-c9054e78d6aa\") " pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.654924 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.855358 4711 generic.go:334] "Generic (PLEG): container finished" podID="69c0d4d9-5212-4d49-870d-4caf43c6eead" containerID="bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6" exitCode=0 Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.855408 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl8sd" event={"ID":"69c0d4d9-5212-4d49-870d-4caf43c6eead","Type":"ContainerDied","Data":"bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6"} Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.855441 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl8sd" event={"ID":"69c0d4d9-5212-4d49-870d-4caf43c6eead","Type":"ContainerDied","Data":"4daaaf12f7c225d39714e0f66640f26b26bbad6d3cbce462caec5dbb7d015762"} Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.855475 4711 scope.go:117] "RemoveContainer" containerID="bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.855506 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jl8sd" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.874019 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7dwdr"] Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.878017 4711 scope.go:117] "RemoveContainer" containerID="bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6" Dec 02 10:28:12 crc kubenswrapper[4711]: E1202 10:28:12.879576 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6\": container with ID starting with bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6 not found: ID does not exist" containerID="bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6" Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.879734 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6"} err="failed to get container status \"bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6\": rpc error: code = NotFound desc = could not find container \"bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6\": container with ID starting with bbf8fe490f1a065afb1811ecac564ec49d6850f25fd6aa95894d60251148e7b6 not found: ID does not exist" Dec 02 10:28:12 crc kubenswrapper[4711]: W1202 10:28:12.884402 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d175158_9d08_4b61_87b8_c9054e78d6aa.slice/crio-adfc126dec58904d53dfbb0f81cf729affa808c7b2f1853fdf55ecbcc41eea59 WatchSource:0}: Error finding container adfc126dec58904d53dfbb0f81cf729affa808c7b2f1853fdf55ecbcc41eea59: Status 404 returned error can't find the container with id 
adfc126dec58904d53dfbb0f81cf729affa808c7b2f1853fdf55ecbcc41eea59 Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.925620 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jl8sd"] Dec 02 10:28:12 crc kubenswrapper[4711]: I1202 10:28:12.930399 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-jl8sd"] Dec 02 10:28:13 crc kubenswrapper[4711]: I1202 10:28:13.086286 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c0d4d9-5212-4d49-870d-4caf43c6eead" path="/var/lib/kubelet/pods/69c0d4d9-5212-4d49-870d-4caf43c6eead/volumes" Dec 02 10:28:13 crc kubenswrapper[4711]: I1202 10:28:13.870592 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7dwdr" event={"ID":"1d175158-9d08-4b61-87b8-c9054e78d6aa","Type":"ContainerStarted","Data":"aac6aa8b02d2d66f9b3291ce8d45cf533c096dafdbe926ec6f9a9abc0c7a92ad"} Dec 02 10:28:13 crc kubenswrapper[4711]: I1202 10:28:13.871146 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7dwdr" event={"ID":"1d175158-9d08-4b61-87b8-c9054e78d6aa","Type":"ContainerStarted","Data":"adfc126dec58904d53dfbb0f81cf729affa808c7b2f1853fdf55ecbcc41eea59"} Dec 02 10:28:13 crc kubenswrapper[4711]: I1202 10:28:13.898884 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7dwdr" podStartSLOduration=1.2323306 podStartE2EDuration="1.89877976s" podCreationTimestamp="2025-12-02 10:28:12 +0000 UTC" firstStartedPulling="2025-12-02 10:28:12.888667989 +0000 UTC m=+882.598034436" lastFinishedPulling="2025-12-02 10:28:13.555117119 +0000 UTC m=+883.264483596" observedRunningTime="2025-12-02 10:28:13.893564008 +0000 UTC m=+883.602930535" watchObservedRunningTime="2025-12-02 10:28:13.89877976 +0000 UTC m=+883.608146247" Dec 02 10:28:21 crc kubenswrapper[4711]: I1202 
10:28:21.209811 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gxh9q" Dec 02 10:28:22 crc kubenswrapper[4711]: I1202 10:28:22.655110 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:22 crc kubenswrapper[4711]: I1202 10:28:22.655182 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:22 crc kubenswrapper[4711]: I1202 10:28:22.698014 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:22 crc kubenswrapper[4711]: I1202 10:28:22.981145 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7dwdr" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.238889 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz"] Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.241538 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.244563 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f7fnb" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.263413 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz"] Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.390944 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmmp\" (UniqueName: \"kubernetes.io/projected/01f223a8-15fc-4798-9fed-4f1624424d95-kube-api-access-mxmmp\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.391267 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-util\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.391360 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-bundle\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 
10:28:29.492232 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxmmp\" (UniqueName: \"kubernetes.io/projected/01f223a8-15fc-4798-9fed-4f1624424d95-kube-api-access-mxmmp\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.492622 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-util\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.492767 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-bundle\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.493241 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-util\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.493401 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-bundle\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.526650 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxmmp\" (UniqueName: \"kubernetes.io/projected/01f223a8-15fc-4798-9fed-4f1624424d95-kube-api-access-mxmmp\") pod \"09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:29 crc kubenswrapper[4711]: I1202 10:28:29.571789 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:30 crc kubenswrapper[4711]: I1202 10:28:30.177511 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz"] Dec 02 10:28:30 crc kubenswrapper[4711]: W1202 10:28:30.182384 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01f223a8_15fc_4798_9fed_4f1624424d95.slice/crio-3707580a0fb792e5fd734b483904058a609d4bc9aa78b9603e190053692a03e4 WatchSource:0}: Error finding container 3707580a0fb792e5fd734b483904058a609d4bc9aa78b9603e190053692a03e4: Status 404 returned error can't find the container with id 3707580a0fb792e5fd734b483904058a609d4bc9aa78b9603e190053692a03e4 Dec 02 10:28:31 crc kubenswrapper[4711]: I1202 10:28:31.017654 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" 
event={"ID":"01f223a8-15fc-4798-9fed-4f1624424d95","Type":"ContainerStarted","Data":"3707580a0fb792e5fd734b483904058a609d4bc9aa78b9603e190053692a03e4"} Dec 02 10:28:32 crc kubenswrapper[4711]: I1202 10:28:32.028413 4711 generic.go:334] "Generic (PLEG): container finished" podID="01f223a8-15fc-4798-9fed-4f1624424d95" containerID="dfebcd66ec77c7da7fdf4179b86053a587604b28636181f228857678dc3f820c" exitCode=0 Dec 02 10:28:32 crc kubenswrapper[4711]: I1202 10:28:32.028479 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" event={"ID":"01f223a8-15fc-4798-9fed-4f1624424d95","Type":"ContainerDied","Data":"dfebcd66ec77c7da7fdf4179b86053a587604b28636181f228857678dc3f820c"} Dec 02 10:28:34 crc kubenswrapper[4711]: I1202 10:28:34.048824 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" event={"ID":"01f223a8-15fc-4798-9fed-4f1624424d95","Type":"ContainerStarted","Data":"c560fda2946c77d8df81b191e2dc62bb8e2955989dddbb9ebd0c97c4bceb6711"} Dec 02 10:28:35 crc kubenswrapper[4711]: I1202 10:28:35.058317 4711 generic.go:334] "Generic (PLEG): container finished" podID="01f223a8-15fc-4798-9fed-4f1624424d95" containerID="c560fda2946c77d8df81b191e2dc62bb8e2955989dddbb9ebd0c97c4bceb6711" exitCode=0 Dec 02 10:28:35 crc kubenswrapper[4711]: I1202 10:28:35.058370 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" event={"ID":"01f223a8-15fc-4798-9fed-4f1624424d95","Type":"ContainerDied","Data":"c560fda2946c77d8df81b191e2dc62bb8e2955989dddbb9ebd0c97c4bceb6711"} Dec 02 10:28:36 crc kubenswrapper[4711]: I1202 10:28:36.068047 4711 generic.go:334] "Generic (PLEG): container finished" podID="01f223a8-15fc-4798-9fed-4f1624424d95" containerID="509cd8e33a3421f4b7035de677a2f0e62f5efb08297d68c85a82a3544c88ebf6" exitCode=0 Dec 
02 10:28:36 crc kubenswrapper[4711]: I1202 10:28:36.068129 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" event={"ID":"01f223a8-15fc-4798-9fed-4f1624424d95","Type":"ContainerDied","Data":"509cd8e33a3421f4b7035de677a2f0e62f5efb08297d68c85a82a3544c88ebf6"} Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.387647 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.548565 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxmmp\" (UniqueName: \"kubernetes.io/projected/01f223a8-15fc-4798-9fed-4f1624424d95-kube-api-access-mxmmp\") pod \"01f223a8-15fc-4798-9fed-4f1624424d95\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.548648 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-bundle\") pod \"01f223a8-15fc-4798-9fed-4f1624424d95\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.548764 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-util\") pod \"01f223a8-15fc-4798-9fed-4f1624424d95\" (UID: \"01f223a8-15fc-4798-9fed-4f1624424d95\") " Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.549723 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-bundle" (OuterVolumeSpecName: "bundle") pod "01f223a8-15fc-4798-9fed-4f1624424d95" (UID: "01f223a8-15fc-4798-9fed-4f1624424d95"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.557425 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f223a8-15fc-4798-9fed-4f1624424d95-kube-api-access-mxmmp" (OuterVolumeSpecName: "kube-api-access-mxmmp") pod "01f223a8-15fc-4798-9fed-4f1624424d95" (UID: "01f223a8-15fc-4798-9fed-4f1624424d95"). InnerVolumeSpecName "kube-api-access-mxmmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.559340 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-util" (OuterVolumeSpecName: "util") pod "01f223a8-15fc-4798-9fed-4f1624424d95" (UID: "01f223a8-15fc-4798-9fed-4f1624424d95"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.650026 4711 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-util\") on node \"crc\" DevicePath \"\"" Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.650083 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxmmp\" (UniqueName: \"kubernetes.io/projected/01f223a8-15fc-4798-9fed-4f1624424d95-kube-api-access-mxmmp\") on node \"crc\" DevicePath \"\"" Dec 02 10:28:37 crc kubenswrapper[4711]: I1202 10:28:37.650105 4711 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01f223a8-15fc-4798-9fed-4f1624424d95-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:28:38 crc kubenswrapper[4711]: I1202 10:28:38.087905 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" 
event={"ID":"01f223a8-15fc-4798-9fed-4f1624424d95","Type":"ContainerDied","Data":"3707580a0fb792e5fd734b483904058a609d4bc9aa78b9603e190053692a03e4"} Dec 02 10:28:38 crc kubenswrapper[4711]: I1202 10:28:38.088248 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3707580a0fb792e5fd734b483904058a609d4bc9aa78b9603e190053692a03e4" Dec 02 10:28:38 crc kubenswrapper[4711]: I1202 10:28:38.088005 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.297402 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw"] Dec 02 10:28:41 crc kubenswrapper[4711]: E1202 10:28:41.297868 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f223a8-15fc-4798-9fed-4f1624424d95" containerName="util" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.297879 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f223a8-15fc-4798-9fed-4f1624424d95" containerName="util" Dec 02 10:28:41 crc kubenswrapper[4711]: E1202 10:28:41.297886 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f223a8-15fc-4798-9fed-4f1624424d95" containerName="extract" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.297892 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f223a8-15fc-4798-9fed-4f1624424d95" containerName="extract" Dec 02 10:28:41 crc kubenswrapper[4711]: E1202 10:28:41.297914 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f223a8-15fc-4798-9fed-4f1624424d95" containerName="pull" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.297920 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f223a8-15fc-4798-9fed-4f1624424d95" containerName="pull" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.298034 4711 
memory_manager.go:354] "RemoveStaleState removing state" podUID="01f223a8-15fc-4798-9fed-4f1624424d95" containerName="extract" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.298394 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.302916 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-9ht6w" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.319366 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw"] Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.324924 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7cf\" (UniqueName: \"kubernetes.io/projected/f105bf88-39ba-4e14-8741-1c3a0d759f63-kube-api-access-4w7cf\") pod \"openstack-operator-controller-operator-d77745b8c-lf4vw\" (UID: \"f105bf88-39ba-4e14-8741-1c3a0d759f63\") " pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.426190 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w7cf\" (UniqueName: \"kubernetes.io/projected/f105bf88-39ba-4e14-8741-1c3a0d759f63-kube-api-access-4w7cf\") pod \"openstack-operator-controller-operator-d77745b8c-lf4vw\" (UID: \"f105bf88-39ba-4e14-8741-1c3a0d759f63\") " pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.456401 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7cf\" (UniqueName: \"kubernetes.io/projected/f105bf88-39ba-4e14-8741-1c3a0d759f63-kube-api-access-4w7cf\") pod 
\"openstack-operator-controller-operator-d77745b8c-lf4vw\" (UID: \"f105bf88-39ba-4e14-8741-1c3a0d759f63\") " pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" Dec 02 10:28:41 crc kubenswrapper[4711]: I1202 10:28:41.614725 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" Dec 02 10:28:42 crc kubenswrapper[4711]: I1202 10:28:42.085633 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw"] Dec 02 10:28:42 crc kubenswrapper[4711]: W1202 10:28:42.091044 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf105bf88_39ba_4e14_8741_1c3a0d759f63.slice/crio-3ae897d81c2816f18d3857bcc5baaaa9ed228ed9905820b6f4939304a140011d WatchSource:0}: Error finding container 3ae897d81c2816f18d3857bcc5baaaa9ed228ed9905820b6f4939304a140011d: Status 404 returned error can't find the container with id 3ae897d81c2816f18d3857bcc5baaaa9ed228ed9905820b6f4939304a140011d Dec 02 10:28:42 crc kubenswrapper[4711]: I1202 10:28:42.123862 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" event={"ID":"f105bf88-39ba-4e14-8741-1c3a0d759f63","Type":"ContainerStarted","Data":"3ae897d81c2816f18d3857bcc5baaaa9ed228ed9905820b6f4939304a140011d"} Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.162759 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" event={"ID":"f105bf88-39ba-4e14-8741-1c3a0d759f63","Type":"ContainerStarted","Data":"fe6e1f6c4ccc232bba7fe20060774415ffacd8ff5983ca12d3fb3f2bac3ef59a"} Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.163225 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.219986 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" podStartSLOduration=2.195576086 podStartE2EDuration="7.219912396s" podCreationTimestamp="2025-12-02 10:28:41 +0000 UTC" firstStartedPulling="2025-12-02 10:28:42.093689042 +0000 UTC m=+911.803055519" lastFinishedPulling="2025-12-02 10:28:47.118025382 +0000 UTC m=+916.827391829" observedRunningTime="2025-12-02 10:28:48.211817773 +0000 UTC m=+917.921184250" watchObservedRunningTime="2025-12-02 10:28:48.219912396 +0000 UTC m=+917.929278883" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.538099 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vfcx6"] Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.540438 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.556860 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfcx6"] Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.645916 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-utilities\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.646004 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7vpk\" (UniqueName: \"kubernetes.io/projected/37e0d7c2-8395-406b-a7d8-86cabd810438-kube-api-access-r7vpk\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.646061 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-catalog-content\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.748163 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-utilities\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.748386 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r7vpk\" (UniqueName: \"kubernetes.io/projected/37e0d7c2-8395-406b-a7d8-86cabd810438-kube-api-access-r7vpk\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.749179 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-catalog-content\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.749270 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-utilities\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.749833 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-catalog-content\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.771205 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7vpk\" (UniqueName: \"kubernetes.io/projected/37e0d7c2-8395-406b-a7d8-86cabd810438-kube-api-access-r7vpk\") pod \"community-operators-vfcx6\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:48 crc kubenswrapper[4711]: I1202 10:28:48.864614 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:49 crc kubenswrapper[4711]: I1202 10:28:49.337236 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfcx6"] Dec 02 10:28:49 crc kubenswrapper[4711]: W1202 10:28:49.349098 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37e0d7c2_8395_406b_a7d8_86cabd810438.slice/crio-091e13760946e8f07d415440efb7a25c1b2a94e5517364207d1a5817c35b6a65 WatchSource:0}: Error finding container 091e13760946e8f07d415440efb7a25c1b2a94e5517364207d1a5817c35b6a65: Status 404 returned error can't find the container with id 091e13760946e8f07d415440efb7a25c1b2a94e5517364207d1a5817c35b6a65 Dec 02 10:28:50 crc kubenswrapper[4711]: I1202 10:28:50.178250 4711 generic.go:334] "Generic (PLEG): container finished" podID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerID="7c7dc3fdcc1e27eb61d8caf26c817dc6a8036ef4534354654d8b5b8142d49a07" exitCode=0 Dec 02 10:28:50 crc kubenswrapper[4711]: I1202 10:28:50.178378 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcx6" event={"ID":"37e0d7c2-8395-406b-a7d8-86cabd810438","Type":"ContainerDied","Data":"7c7dc3fdcc1e27eb61d8caf26c817dc6a8036ef4534354654d8b5b8142d49a07"} Dec 02 10:28:50 crc kubenswrapper[4711]: I1202 10:28:50.178742 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcx6" event={"ID":"37e0d7c2-8395-406b-a7d8-86cabd810438","Type":"ContainerStarted","Data":"091e13760946e8f07d415440efb7a25c1b2a94e5517364207d1a5817c35b6a65"} Dec 02 10:28:52 crc kubenswrapper[4711]: I1202 10:28:52.202039 4711 generic.go:334] "Generic (PLEG): container finished" podID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerID="a01c76d56e25b28d3233c9082fd63a532b2086af89093fc14ac5c8673ca91668" exitCode=0 Dec 02 10:28:52 crc kubenswrapper[4711]: I1202 
10:28:52.202105 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcx6" event={"ID":"37e0d7c2-8395-406b-a7d8-86cabd810438","Type":"ContainerDied","Data":"a01c76d56e25b28d3233c9082fd63a532b2086af89093fc14ac5c8673ca91668"} Dec 02 10:28:52 crc kubenswrapper[4711]: I1202 10:28:52.585616 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:28:52 crc kubenswrapper[4711]: I1202 10:28:52.585763 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:28:54 crc kubenswrapper[4711]: I1202 10:28:54.222101 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcx6" event={"ID":"37e0d7c2-8395-406b-a7d8-86cabd810438","Type":"ContainerStarted","Data":"363009d8a9b3fcf6c5becfd1188629121a004b9efbe5d5f690c12b86ceb6565d"} Dec 02 10:28:54 crc kubenswrapper[4711]: I1202 10:28:54.253589 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vfcx6" podStartSLOduration=3.414984688 podStartE2EDuration="6.253560195s" podCreationTimestamp="2025-12-02 10:28:48 +0000 UTC" firstStartedPulling="2025-12-02 10:28:50.18045071 +0000 UTC m=+919.889817197" lastFinishedPulling="2025-12-02 10:28:53.019026257 +0000 UTC m=+922.728392704" observedRunningTime="2025-12-02 10:28:54.249866384 +0000 UTC m=+923.959232831" watchObservedRunningTime="2025-12-02 10:28:54.253560195 +0000 UTC m=+923.962926682" Dec 02 10:28:58 crc 
kubenswrapper[4711]: I1202 10:28:58.865197 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:58 crc kubenswrapper[4711]: I1202 10:28:58.865839 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:58 crc kubenswrapper[4711]: I1202 10:28:58.927059 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:28:59 crc kubenswrapper[4711]: I1202 10:28:59.315113 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.132535 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58l5h"] Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.133585 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.156666 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58l5h"] Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.286530 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-catalog-content\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.286584 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-utilities\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.286618 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g88s\" (UniqueName: \"kubernetes.io/projected/0d507f3f-edd5-4a87-9820-fd5512703b24-kube-api-access-2g88s\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.388607 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-catalog-content\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.388694 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-utilities\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.388738 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g88s\" (UniqueName: \"kubernetes.io/projected/0d507f3f-edd5-4a87-9820-fd5512703b24-kube-api-access-2g88s\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.389196 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-catalog-content\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.389272 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-utilities\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.414300 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g88s\" (UniqueName: \"kubernetes.io/projected/0d507f3f-edd5-4a87-9820-fd5512703b24-kube-api-access-2g88s\") pod \"certified-operators-58l5h\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.451710 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.524888 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ws4kg"] Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.526200 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.531378 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws4kg"] Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.691758 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r97z2\" (UniqueName: \"kubernetes.io/projected/9733fd7c-4874-4823-b16b-63068e79b83d-kube-api-access-r97z2\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.692101 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-catalog-content\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.692130 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-utilities\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.694697 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-58l5h"] Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.793030 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r97z2\" (UniqueName: \"kubernetes.io/projected/9733fd7c-4874-4823-b16b-63068e79b83d-kube-api-access-r97z2\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.793102 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-catalog-content\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.793120 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-utilities\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.793573 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-utilities\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.793626 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-catalog-content\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " 
pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.817869 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r97z2\" (UniqueName: \"kubernetes.io/projected/9733fd7c-4874-4823-b16b-63068e79b83d-kube-api-access-r97z2\") pod \"redhat-marketplace-ws4kg\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:00 crc kubenswrapper[4711]: I1202 10:29:00.850509 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:01 crc kubenswrapper[4711]: I1202 10:29:01.117291 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws4kg"] Dec 02 10:29:01 crc kubenswrapper[4711]: W1202 10:29:01.122097 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9733fd7c_4874_4823_b16b_63068e79b83d.slice/crio-a9262bae62e7a1ffa40b86948d8574d0e3dd18be1e409b941c3d547afbbddb1e WatchSource:0}: Error finding container a9262bae62e7a1ffa40b86948d8574d0e3dd18be1e409b941c3d547afbbddb1e: Status 404 returned error can't find the container with id a9262bae62e7a1ffa40b86948d8574d0e3dd18be1e409b941c3d547afbbddb1e Dec 02 10:29:01 crc kubenswrapper[4711]: I1202 10:29:01.301595 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws4kg" event={"ID":"9733fd7c-4874-4823-b16b-63068e79b83d","Type":"ContainerStarted","Data":"a9262bae62e7a1ffa40b86948d8574d0e3dd18be1e409b941c3d547afbbddb1e"} Dec 02 10:29:01 crc kubenswrapper[4711]: I1202 10:29:01.308145 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58l5h" 
event={"ID":"0d507f3f-edd5-4a87-9820-fd5512703b24","Type":"ContainerStarted","Data":"8e57d87f28c7c262514b154d9a5be56e6a0b618e74670848af330cfc8b0278ae"} Dec 02 10:29:01 crc kubenswrapper[4711]: I1202 10:29:01.618879 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-d77745b8c-lf4vw" Dec 02 10:29:02 crc kubenswrapper[4711]: I1202 10:29:02.318493 4711 generic.go:334] "Generic (PLEG): container finished" podID="9733fd7c-4874-4823-b16b-63068e79b83d" containerID="c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f" exitCode=0 Dec 02 10:29:02 crc kubenswrapper[4711]: I1202 10:29:02.318654 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws4kg" event={"ID":"9733fd7c-4874-4823-b16b-63068e79b83d","Type":"ContainerDied","Data":"c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f"} Dec 02 10:29:02 crc kubenswrapper[4711]: I1202 10:29:02.321806 4711 generic.go:334] "Generic (PLEG): container finished" podID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerID="5ca168776f92f58e43bbd22d442d844293758ed30c6073da826aa81d30594e39" exitCode=0 Dec 02 10:29:02 crc kubenswrapper[4711]: I1202 10:29:02.321860 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58l5h" event={"ID":"0d507f3f-edd5-4a87-9820-fd5512703b24","Type":"ContainerDied","Data":"5ca168776f92f58e43bbd22d442d844293758ed30c6073da826aa81d30594e39"} Dec 02 10:29:03 crc kubenswrapper[4711]: I1202 10:29:03.726124 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfcx6"] Dec 02 10:29:03 crc kubenswrapper[4711]: I1202 10:29:03.727274 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vfcx6" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerName="registry-server" 
containerID="cri-o://363009d8a9b3fcf6c5becfd1188629121a004b9efbe5d5f690c12b86ceb6565d" gracePeriod=2 Dec 02 10:29:04 crc kubenswrapper[4711]: I1202 10:29:04.342466 4711 generic.go:334] "Generic (PLEG): container finished" podID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerID="363009d8a9b3fcf6c5becfd1188629121a004b9efbe5d5f690c12b86ceb6565d" exitCode=0 Dec 02 10:29:04 crc kubenswrapper[4711]: I1202 10:29:04.342608 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcx6" event={"ID":"37e0d7c2-8395-406b-a7d8-86cabd810438","Type":"ContainerDied","Data":"363009d8a9b3fcf6c5becfd1188629121a004b9efbe5d5f690c12b86ceb6565d"} Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.672599 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.772009 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-catalog-content\") pod \"37e0d7c2-8395-406b-a7d8-86cabd810438\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.772183 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-utilities\") pod \"37e0d7c2-8395-406b-a7d8-86cabd810438\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.772357 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7vpk\" (UniqueName: \"kubernetes.io/projected/37e0d7c2-8395-406b-a7d8-86cabd810438-kube-api-access-r7vpk\") pod \"37e0d7c2-8395-406b-a7d8-86cabd810438\" (UID: \"37e0d7c2-8395-406b-a7d8-86cabd810438\") " Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 
10:29:05.774866 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-utilities" (OuterVolumeSpecName: "utilities") pod "37e0d7c2-8395-406b-a7d8-86cabd810438" (UID: "37e0d7c2-8395-406b-a7d8-86cabd810438"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.783107 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e0d7c2-8395-406b-a7d8-86cabd810438-kube-api-access-r7vpk" (OuterVolumeSpecName: "kube-api-access-r7vpk") pod "37e0d7c2-8395-406b-a7d8-86cabd810438" (UID: "37e0d7c2-8395-406b-a7d8-86cabd810438"). InnerVolumeSpecName "kube-api-access-r7vpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.852105 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37e0d7c2-8395-406b-a7d8-86cabd810438" (UID: "37e0d7c2-8395-406b-a7d8-86cabd810438"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.874611 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.874676 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7vpk\" (UniqueName: \"kubernetes.io/projected/37e0d7c2-8395-406b-a7d8-86cabd810438-kube-api-access-r7vpk\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:05 crc kubenswrapper[4711]: I1202 10:29:05.874701 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e0d7c2-8395-406b-a7d8-86cabd810438-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.375647 4711 generic.go:334] "Generic (PLEG): container finished" podID="9733fd7c-4874-4823-b16b-63068e79b83d" containerID="6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168" exitCode=0 Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.375735 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws4kg" event={"ID":"9733fd7c-4874-4823-b16b-63068e79b83d","Type":"ContainerDied","Data":"6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168"} Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.382291 4711 generic.go:334] "Generic (PLEG): container finished" podID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerID="73c481b78d3e8c3406de995eef7c5f3767ff67126b1080937802ab4c1ec02cdc" exitCode=0 Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.382436 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58l5h" 
event={"ID":"0d507f3f-edd5-4a87-9820-fd5512703b24","Type":"ContainerDied","Data":"73c481b78d3e8c3406de995eef7c5f3767ff67126b1080937802ab4c1ec02cdc"} Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.414419 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcx6" event={"ID":"37e0d7c2-8395-406b-a7d8-86cabd810438","Type":"ContainerDied","Data":"091e13760946e8f07d415440efb7a25c1b2a94e5517364207d1a5817c35b6a65"} Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.414478 4711 scope.go:117] "RemoveContainer" containerID="363009d8a9b3fcf6c5becfd1188629121a004b9efbe5d5f690c12b86ceb6565d" Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.414524 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfcx6" Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.449140 4711 scope.go:117] "RemoveContainer" containerID="a01c76d56e25b28d3233c9082fd63a532b2086af89093fc14ac5c8673ca91668" Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.489658 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfcx6"] Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.490091 4711 scope.go:117] "RemoveContainer" containerID="7c7dc3fdcc1e27eb61d8caf26c817dc6a8036ef4534354654d8b5b8142d49a07" Dec 02 10:29:06 crc kubenswrapper[4711]: I1202 10:29:06.498924 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vfcx6"] Dec 02 10:29:07 crc kubenswrapper[4711]: I1202 10:29:07.093414 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" path="/var/lib/kubelet/pods/37e0d7c2-8395-406b-a7d8-86cabd810438/volumes" Dec 02 10:29:07 crc kubenswrapper[4711]: I1202 10:29:07.423679 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58l5h" 
event={"ID":"0d507f3f-edd5-4a87-9820-fd5512703b24","Type":"ContainerStarted","Data":"5d47b837c494b5ef9c6209ca496c06392f35d3645e07554d46a074bcc16805c3"} Dec 02 10:29:07 crc kubenswrapper[4711]: I1202 10:29:07.429665 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws4kg" event={"ID":"9733fd7c-4874-4823-b16b-63068e79b83d","Type":"ContainerStarted","Data":"fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1"} Dec 02 10:29:07 crc kubenswrapper[4711]: I1202 10:29:07.446445 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58l5h" podStartSLOduration=2.585746775 podStartE2EDuration="7.446409836s" podCreationTimestamp="2025-12-02 10:29:00 +0000 UTC" firstStartedPulling="2025-12-02 10:29:02.324864447 +0000 UTC m=+932.034230914" lastFinishedPulling="2025-12-02 10:29:07.185527518 +0000 UTC m=+936.894893975" observedRunningTime="2025-12-02 10:29:07.445774548 +0000 UTC m=+937.155141015" watchObservedRunningTime="2025-12-02 10:29:07.446409836 +0000 UTC m=+937.155776283" Dec 02 10:29:07 crc kubenswrapper[4711]: I1202 10:29:07.468725 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ws4kg" podStartSLOduration=2.63515674 podStartE2EDuration="7.468704349s" podCreationTimestamp="2025-12-02 10:29:00 +0000 UTC" firstStartedPulling="2025-12-02 10:29:02.324336451 +0000 UTC m=+932.033702918" lastFinishedPulling="2025-12-02 10:29:07.157884059 +0000 UTC m=+936.867250527" observedRunningTime="2025-12-02 10:29:07.463052854 +0000 UTC m=+937.172419311" watchObservedRunningTime="2025-12-02 10:29:07.468704349 +0000 UTC m=+937.178070796" Dec 02 10:29:10 crc kubenswrapper[4711]: I1202 10:29:10.452311 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:10 crc kubenswrapper[4711]: I1202 10:29:10.452823 4711 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:10 crc kubenswrapper[4711]: I1202 10:29:10.513581 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:10 crc kubenswrapper[4711]: I1202 10:29:10.851393 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:10 crc kubenswrapper[4711]: I1202 10:29:10.851467 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:10 crc kubenswrapper[4711]: I1202 10:29:10.896470 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:20 crc kubenswrapper[4711]: I1202 10:29:20.489263 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:20 crc kubenswrapper[4711]: I1202 10:29:20.920502 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:22 crc kubenswrapper[4711]: I1202 10:29:22.586428 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:29:22 crc kubenswrapper[4711]: I1202 10:29:22.586532 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 02 10:29:23 crc kubenswrapper[4711]: I1202 10:29:23.713318 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58l5h"] Dec 02 10:29:23 crc kubenswrapper[4711]: I1202 10:29:23.713829 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58l5h" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerName="registry-server" containerID="cri-o://5d47b837c494b5ef9c6209ca496c06392f35d3645e07554d46a074bcc16805c3" gracePeriod=2 Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.529525 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws4kg"] Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.530080 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ws4kg" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" containerName="registry-server" containerID="cri-o://fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1" gracePeriod=2 Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.553220 4711 generic.go:334] "Generic (PLEG): container finished" podID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerID="5d47b837c494b5ef9c6209ca496c06392f35d3645e07554d46a074bcc16805c3" exitCode=0 Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.553266 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58l5h" event={"ID":"0d507f3f-edd5-4a87-9820-fd5512703b24","Type":"ContainerDied","Data":"5d47b837c494b5ef9c6209ca496c06392f35d3645e07554d46a074bcc16805c3"} Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.686065 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.849399 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-catalog-content\") pod \"0d507f3f-edd5-4a87-9820-fd5512703b24\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.849495 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g88s\" (UniqueName: \"kubernetes.io/projected/0d507f3f-edd5-4a87-9820-fd5512703b24-kube-api-access-2g88s\") pod \"0d507f3f-edd5-4a87-9820-fd5512703b24\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.849560 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-utilities\") pod \"0d507f3f-edd5-4a87-9820-fd5512703b24\" (UID: \"0d507f3f-edd5-4a87-9820-fd5512703b24\") " Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.850604 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-utilities" (OuterVolumeSpecName: "utilities") pod "0d507f3f-edd5-4a87-9820-fd5512703b24" (UID: "0d507f3f-edd5-4a87-9820-fd5512703b24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.879118 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d507f3f-edd5-4a87-9820-fd5512703b24-kube-api-access-2g88s" (OuterVolumeSpecName: "kube-api-access-2g88s") pod "0d507f3f-edd5-4a87-9820-fd5512703b24" (UID: "0d507f3f-edd5-4a87-9820-fd5512703b24"). InnerVolumeSpecName "kube-api-access-2g88s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.952634 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.952917 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g88s\" (UniqueName: \"kubernetes.io/projected/0d507f3f-edd5-4a87-9820-fd5512703b24-kube-api-access-2g88s\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.964352 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d507f3f-edd5-4a87-9820-fd5512703b24" (UID: "0d507f3f-edd5-4a87-9820-fd5512703b24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:24 crc kubenswrapper[4711]: I1202 10:29:24.993683 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.054104 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d507f3f-edd5-4a87-9820-fd5512703b24-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.154865 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-utilities\") pod \"9733fd7c-4874-4823-b16b-63068e79b83d\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.155006 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-catalog-content\") pod \"9733fd7c-4874-4823-b16b-63068e79b83d\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.155039 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r97z2\" (UniqueName: \"kubernetes.io/projected/9733fd7c-4874-4823-b16b-63068e79b83d-kube-api-access-r97z2\") pod \"9733fd7c-4874-4823-b16b-63068e79b83d\" (UID: \"9733fd7c-4874-4823-b16b-63068e79b83d\") " Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.155714 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-utilities" (OuterVolumeSpecName: "utilities") pod "9733fd7c-4874-4823-b16b-63068e79b83d" (UID: "9733fd7c-4874-4823-b16b-63068e79b83d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.159721 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9733fd7c-4874-4823-b16b-63068e79b83d-kube-api-access-r97z2" (OuterVolumeSpecName: "kube-api-access-r97z2") pod "9733fd7c-4874-4823-b16b-63068e79b83d" (UID: "9733fd7c-4874-4823-b16b-63068e79b83d"). InnerVolumeSpecName "kube-api-access-r97z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.174009 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9733fd7c-4874-4823-b16b-63068e79b83d" (UID: "9733fd7c-4874-4823-b16b-63068e79b83d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.258030 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.258061 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r97z2\" (UniqueName: \"kubernetes.io/projected/9733fd7c-4874-4823-b16b-63068e79b83d-kube-api-access-r97z2\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.258073 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9733fd7c-4874-4823-b16b-63068e79b83d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.565751 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58l5h" 
event={"ID":"0d507f3f-edd5-4a87-9820-fd5512703b24","Type":"ContainerDied","Data":"8e57d87f28c7c262514b154d9a5be56e6a0b618e74670848af330cfc8b0278ae"} Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.565802 4711 scope.go:117] "RemoveContainer" containerID="5d47b837c494b5ef9c6209ca496c06392f35d3645e07554d46a074bcc16805c3" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.565867 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58l5h" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.569440 4711 generic.go:334] "Generic (PLEG): container finished" podID="9733fd7c-4874-4823-b16b-63068e79b83d" containerID="fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1" exitCode=0 Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.569484 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws4kg" event={"ID":"9733fd7c-4874-4823-b16b-63068e79b83d","Type":"ContainerDied","Data":"fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1"} Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.569514 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ws4kg" event={"ID":"9733fd7c-4874-4823-b16b-63068e79b83d","Type":"ContainerDied","Data":"a9262bae62e7a1ffa40b86948d8574d0e3dd18be1e409b941c3d547afbbddb1e"} Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.569556 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ws4kg" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.600885 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58l5h"] Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.603996 4711 scope.go:117] "RemoveContainer" containerID="73c481b78d3e8c3406de995eef7c5f3767ff67126b1080937802ab4c1ec02cdc" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.606315 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58l5h"] Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.623232 4711 scope.go:117] "RemoveContainer" containerID="5ca168776f92f58e43bbd22d442d844293758ed30c6073da826aa81d30594e39" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.636535 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws4kg"] Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.640843 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ws4kg"] Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.647584 4711 scope.go:117] "RemoveContainer" containerID="fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.667423 4711 scope.go:117] "RemoveContainer" containerID="6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.685717 4711 scope.go:117] "RemoveContainer" containerID="c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.699322 4711 scope.go:117] "RemoveContainer" containerID="fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1" Dec 02 10:29:25 crc kubenswrapper[4711]: E1202 10:29:25.699820 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1\": container with ID starting with fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1 not found: ID does not exist" containerID="fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.699873 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1"} err="failed to get container status \"fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1\": rpc error: code = NotFound desc = could not find container \"fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1\": container with ID starting with fa8492bc385ffe6e5d5363eb89c11c8042d70e8b33b5eb317411235e3b041ff1 not found: ID does not exist" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.699905 4711 scope.go:117] "RemoveContainer" containerID="6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168" Dec 02 10:29:25 crc kubenswrapper[4711]: E1202 10:29:25.700258 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168\": container with ID starting with 6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168 not found: ID does not exist" containerID="6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.700321 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168"} err="failed to get container status \"6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168\": rpc error: code = NotFound desc = could not find container 
\"6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168\": container with ID starting with 6aec6066845ae61097e53ab8f1aea61facdde1cc08b8e80b3b631e10c86b2168 not found: ID does not exist" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.700349 4711 scope.go:117] "RemoveContainer" containerID="c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f" Dec 02 10:29:25 crc kubenswrapper[4711]: E1202 10:29:25.700837 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f\": container with ID starting with c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f not found: ID does not exist" containerID="c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f" Dec 02 10:29:25 crc kubenswrapper[4711]: I1202 10:29:25.700871 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f"} err="failed to get container status \"c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f\": rpc error: code = NotFound desc = could not find container \"c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f\": container with ID starting with c790ece52c6888a4e58eec520a71d54cc816dff0e004371ca7e32d07407f143f not found: ID does not exist" Dec 02 10:29:27 crc kubenswrapper[4711]: I1202 10:29:27.088697 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" path="/var/lib/kubelet/pods/0d507f3f-edd5-4a87-9820-fd5512703b24/volumes" Dec 02 10:29:27 crc kubenswrapper[4711]: I1202 10:29:27.091147 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" path="/var/lib/kubelet/pods/9733fd7c-4874-4823-b16b-63068e79b83d/volumes" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703302 
4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n"] Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703789 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703809 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703824 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerName="extract-content" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703833 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerName="extract-content" Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703841 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703847 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703856 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerName="extract-content" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703861 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerName="extract-content" Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703875 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" containerName="extract-content" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703880 4711 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" containerName="extract-content" Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703887 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" containerName="extract-utilities" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703893 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" containerName="extract-utilities" Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703903 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerName="extract-utilities" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703908 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerName="extract-utilities" Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703920 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703925 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: E1202 10:29:30.703934 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerName="extract-utilities" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.703940 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerName="extract-utilities" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.704078 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e0d7c2-8395-406b-a7d8-86cabd810438" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.704096 4711 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9733fd7c-4874-4823-b16b-63068e79b83d" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.704107 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d507f3f-edd5-4a87-9820-fd5512703b24" containerName="registry-server" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.704721 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.709255 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vfkdm" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.717340 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.726778 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.727924 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.729731 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qp2f7" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.744048 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.764684 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rlk\" (UniqueName: \"kubernetes.io/projected/10c23c28-0e51-465d-ba7c-1becd6a7b5ee-kube-api-access-w9rlk\") pod \"barbican-operator-controller-manager-7d9dfd778-zsv2n\" (UID: \"10c23c28-0e51-465d-ba7c-1becd6a7b5ee\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.764766 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgjtf\" (UniqueName: \"kubernetes.io/projected/d5039117-0162-4158-b6f7-a3dedff319fb-kube-api-access-qgjtf\") pod \"cinder-operator-controller-manager-859b6ccc6-n9x57\" (UID: \"d5039117-0162-4158-b6f7-a3dedff319fb\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.764904 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.765908 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.768682 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sbgsf" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.772723 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.773604 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.775647 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ntt29" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.788333 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.804693 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.809839 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.811141 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.816724 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8rhzn" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.827506 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.838126 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.839677 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.853599 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vs99d" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.853730 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.865070 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.865435 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gws7d\" (UniqueName: \"kubernetes.io/projected/26eb7b16-7210-459f-baac-e740acdb363e-kube-api-access-gws7d\") pod \"designate-operator-controller-manager-78b4bc895b-ktj75\" (UID: \"26eb7b16-7210-459f-baac-e740acdb363e\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.865493 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9rlk\" (UniqueName: \"kubernetes.io/projected/10c23c28-0e51-465d-ba7c-1becd6a7b5ee-kube-api-access-w9rlk\") pod \"barbican-operator-controller-manager-7d9dfd778-zsv2n\" (UID: \"10c23c28-0e51-465d-ba7c-1becd6a7b5ee\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.865531 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgjtf\" (UniqueName: \"kubernetes.io/projected/d5039117-0162-4158-b6f7-a3dedff319fb-kube-api-access-qgjtf\") pod \"cinder-operator-controller-manager-859b6ccc6-n9x57\" (UID: \"d5039117-0162-4158-b6f7-a3dedff319fb\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.865550 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4v82\" (UniqueName: \"kubernetes.io/projected/8284f010-fa2e-45fd-aa0f-46958a91102b-kube-api-access-f4v82\") pod \"heat-operator-controller-manager-5f64f6f8bb-v98q2\" (UID: \"8284f010-fa2e-45fd-aa0f-46958a91102b\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" Dec 02 10:29:30 crc 
kubenswrapper[4711]: I1202 10:29:30.865589 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgn2w\" (UniqueName: \"kubernetes.io/projected/a1984464-0dac-491f-a2f7-bc1f9214fef8-kube-api-access-dgn2w\") pod \"glance-operator-controller-manager-77987cd8cd-pbvd2\" (UID: \"a1984464-0dac-491f-a2f7-bc1f9214fef8\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.897385 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.897842 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-49lww" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.897965 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.911739 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgjtf\" (UniqueName: \"kubernetes.io/projected/d5039117-0162-4158-b6f7-a3dedff319fb-kube-api-access-qgjtf\") pod \"cinder-operator-controller-manager-859b6ccc6-n9x57\" (UID: \"d5039117-0162-4158-b6f7-a3dedff319fb\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.922292 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.925696 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9rlk\" (UniqueName: \"kubernetes.io/projected/10c23c28-0e51-465d-ba7c-1becd6a7b5ee-kube-api-access-w9rlk\") pod 
\"barbican-operator-controller-manager-7d9dfd778-zsv2n\" (UID: \"10c23c28-0e51-465d-ba7c-1becd6a7b5ee\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.943920 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.959139 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.967607 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.967730 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7669g" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.968903 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgn2w\" (UniqueName: \"kubernetes.io/projected/a1984464-0dac-491f-a2f7-bc1f9214fef8-kube-api-access-dgn2w\") pod \"glance-operator-controller-manager-77987cd8cd-pbvd2\" (UID: \"a1984464-0dac-491f-a2f7-bc1f9214fef8\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.968982 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2748\" (UniqueName: \"kubernetes.io/projected/90b53574-c0b1-4bc6-ba22-238abb3c5b32-kube-api-access-d2748\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.969034 
4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gws7d\" (UniqueName: \"kubernetes.io/projected/26eb7b16-7210-459f-baac-e740acdb363e-kube-api-access-gws7d\") pod \"designate-operator-controller-manager-78b4bc895b-ktj75\" (UID: \"26eb7b16-7210-459f-baac-e740acdb363e\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.969056 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvgr\" (UniqueName: \"kubernetes.io/projected/59853ec3-31ef-402d-8f5f-c12528b688f0-kube-api-access-qnvgr\") pod \"horizon-operator-controller-manager-68c6d99b8f-bqhpj\" (UID: \"59853ec3-31ef-402d-8f5f-c12528b688f0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.970739 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.970782 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4v82\" (UniqueName: \"kubernetes.io/projected/8284f010-fa2e-45fd-aa0f-46958a91102b-kube-api-access-f4v82\") pod \"heat-operator-controller-manager-5f64f6f8bb-v98q2\" (UID: \"8284f010-fa2e-45fd-aa0f-46958a91102b\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.980903 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr"] Dec 02 10:29:30 crc 
kubenswrapper[4711]: I1202 10:29:30.982147 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.983107 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.983247 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.989069 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d"] Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.990565 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-g8m44" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.990835 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jlm2p" Dec 02 10:29:30 crc kubenswrapper[4711]: I1202 10:29:30.993545 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4v82\" (UniqueName: \"kubernetes.io/projected/8284f010-fa2e-45fd-aa0f-46958a91102b-kube-api-access-f4v82\") pod \"heat-operator-controller-manager-5f64f6f8bb-v98q2\" (UID: \"8284f010-fa2e-45fd-aa0f-46958a91102b\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.005651 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgn2w\" (UniqueName: \"kubernetes.io/projected/a1984464-0dac-491f-a2f7-bc1f9214fef8-kube-api-access-dgn2w\") pod \"glance-operator-controller-manager-77987cd8cd-pbvd2\" (UID: 
\"a1984464-0dac-491f-a2f7-bc1f9214fef8\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.015990 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.019591 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gws7d\" (UniqueName: \"kubernetes.io/projected/26eb7b16-7210-459f-baac-e740acdb363e-kube-api-access-gws7d\") pod \"designate-operator-controller-manager-78b4bc895b-ktj75\" (UID: \"26eb7b16-7210-459f-baac-e740acdb363e\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.025150 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.030803 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.031801 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.032605 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.032804 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.034865 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.035357 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lmnwx" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.035586 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-g42bv" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.037325 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.049397 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5xlv6" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.053044 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qp2f7" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.059587 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.071655 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnbf\" (UniqueName: \"kubernetes.io/projected/0b12ad88-acba-4d9f-82ac-f59d3ca57ac8-kube-api-access-shnbf\") pod \"keystone-operator-controller-manager-7765d96ddf-xkq8d\" (UID: \"0b12ad88-acba-4d9f-82ac-f59d3ca57ac8\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.071718 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnvgr\" (UniqueName: \"kubernetes.io/projected/59853ec3-31ef-402d-8f5f-c12528b688f0-kube-api-access-qnvgr\") pod \"horizon-operator-controller-manager-68c6d99b8f-bqhpj\" (UID: \"59853ec3-31ef-402d-8f5f-c12528b688f0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.071743 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n744f\" (UniqueName: \"kubernetes.io/projected/102348ad-5257-4114-acd6-e0e6c60a3c2b-kube-api-access-n744f\") pod \"ironic-operator-controller-manager-6c548fd776-cwr6k\" (UID: \"102348ad-5257-4114-acd6-e0e6c60a3c2b\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.071782 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 
10:29:31.071804 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmd4\" (UniqueName: \"kubernetes.io/projected/68951454-b246-49ab-b604-a62c48e0b2ea-kube-api-access-mcmd4\") pod \"nova-operator-controller-manager-697bc559fc-wdt5k\" (UID: \"68951454-b246-49ab-b604-a62c48e0b2ea\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.071827 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q249h\" (UniqueName: \"kubernetes.io/projected/1f5bd4c4-1262-47a2-94fb-bce66ebe7929-kube-api-access-q249h\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-99nd2\" (UID: \"1f5bd4c4-1262-47a2-94fb-bce66ebe7929\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.071844 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gk9c\" (UniqueName: \"kubernetes.io/projected/7e3c4c79-5009-40f8-80f9-0d30bf57cc5a-kube-api-access-5gk9c\") pod \"mariadb-operator-controller-manager-56bbcc9d85-lsgwm\" (UID: \"7e3c4c79-5009-40f8-80f9-0d30bf57cc5a\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.071871 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqpht\" (UniqueName: \"kubernetes.io/projected/9d8cab18-532c-45c8-ba21-6f3bee02c722-kube-api-access-zqpht\") pod \"manila-operator-controller-manager-7c79b5df47-5nkxr\" (UID: \"9d8cab18-532c-45c8-ba21-6f3bee02c722\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.071915 4711 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-d2748\" (UniqueName: \"kubernetes.io/projected/90b53574-c0b1-4bc6-ba22-238abb3c5b32-kube-api-access-d2748\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.075048 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-24pj6"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.076753 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.084334 4711 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.084435 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert podName:90b53574-c0b1-4bc6-ba22-238abb3c5b32 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:31.584403569 +0000 UTC m=+961.293770016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert") pod "infra-operator-controller-manager-57548d458d-tnkm7" (UID: "90b53574-c0b1-4bc6-ba22-238abb3c5b32") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.085324 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.103316 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bl4vx" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.107473 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2748\" (UniqueName: \"kubernetes.io/projected/90b53574-c0b1-4bc6-ba22-238abb3c5b32-kube-api-access-d2748\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.109285 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sbgsf" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.111299 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.111325 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.111339 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-24pj6"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.111353 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.113145 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ntt29" Dec 02 10:29:31 crc kubenswrapper[4711]: 
I1202 10:29:31.113407 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.121743 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.123672 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.123735 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnvgr\" (UniqueName: \"kubernetes.io/projected/59853ec3-31ef-402d-8f5f-c12528b688f0-kube-api-access-qnvgr\") pod \"horizon-operator-controller-manager-68c6d99b8f-bqhpj\" (UID: \"59853ec3-31ef-402d-8f5f-c12528b688f0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.124733 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.128524 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.129943 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.130425 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hdq2r" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.132290 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9xp7j" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.132303 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.141836 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.151018 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.160176 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.161762 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.166189 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8rhzn" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.169435 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wtfsl" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.173377 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.174387 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.174869 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.175969 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxrnw\" (UniqueName: \"kubernetes.io/projected/0872909b-ee36-482c-a6d7-f6d7ee6cc5ff-kube-api-access-wxrnw\") pod \"ovn-operator-controller-manager-b6456fdb6-l7s28\" (UID: \"0872909b-ee36-482c-a6d7-f6d7ee6cc5ff\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.176027 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.176070 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shnbf\" (UniqueName: \"kubernetes.io/projected/0b12ad88-acba-4d9f-82ac-f59d3ca57ac8-kube-api-access-shnbf\") pod \"keystone-operator-controller-manager-7765d96ddf-xkq8d\" (UID: \"0b12ad88-acba-4d9f-82ac-f59d3ca57ac8\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.176105 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n744f\" (UniqueName: \"kubernetes.io/projected/102348ad-5257-4114-acd6-e0e6c60a3c2b-kube-api-access-n744f\") pod \"ironic-operator-controller-manager-6c548fd776-cwr6k\" (UID: \"102348ad-5257-4114-acd6-e0e6c60a3c2b\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" Dec 02 10:29:31 crc kubenswrapper[4711]: 
I1202 10:29:31.176246 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcmd4\" (UniqueName: \"kubernetes.io/projected/68951454-b246-49ab-b604-a62c48e0b2ea-kube-api-access-mcmd4\") pod \"nova-operator-controller-manager-697bc559fc-wdt5k\" (UID: \"68951454-b246-49ab-b604-a62c48e0b2ea\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.176290 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f464r\" (UniqueName: \"kubernetes.io/projected/f4109dad-388a-493d-b026-6cd10b9f76dd-kube-api-access-f464r\") pod \"octavia-operator-controller-manager-998648c74-24pj6\" (UID: \"f4109dad-388a-493d-b026-6cd10b9f76dd\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.176323 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q249h\" (UniqueName: \"kubernetes.io/projected/1f5bd4c4-1262-47a2-94fb-bce66ebe7929-kube-api-access-q249h\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-99nd2\" (UID: \"1f5bd4c4-1262-47a2-94fb-bce66ebe7929\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.176359 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gk9c\" (UniqueName: \"kubernetes.io/projected/7e3c4c79-5009-40f8-80f9-0d30bf57cc5a-kube-api-access-5gk9c\") pod \"mariadb-operator-controller-manager-56bbcc9d85-lsgwm\" (UID: \"7e3c4c79-5009-40f8-80f9-0d30bf57cc5a\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.176377 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqpht\" (UniqueName: 
\"kubernetes.io/projected/9d8cab18-532c-45c8-ba21-6f3bee02c722-kube-api-access-zqpht\") pod \"manila-operator-controller-manager-7c79b5df47-5nkxr\" (UID: \"9d8cab18-532c-45c8-ba21-6f3bee02c722\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.176407 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kx6\" (UniqueName: \"kubernetes.io/projected/6ef2b37f-78be-4a19-9d1b-b7d982032aab-kube-api-access-q7kx6\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.177444 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rct5s" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.180733 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.214717 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shnbf\" (UniqueName: \"kubernetes.io/projected/0b12ad88-acba-4d9f-82ac-f59d3ca57ac8-kube-api-access-shnbf\") pod \"keystone-operator-controller-manager-7765d96ddf-xkq8d\" (UID: \"0b12ad88-acba-4d9f-82ac-f59d3ca57ac8\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.223868 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.223909 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.224017 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.227735 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-g5qft" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.228364 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n744f\" (UniqueName: \"kubernetes.io/projected/102348ad-5257-4114-acd6-e0e6c60a3c2b-kube-api-access-n744f\") pod \"ironic-operator-controller-manager-6c548fd776-cwr6k\" (UID: \"102348ad-5257-4114-acd6-e0e6c60a3c2b\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.236663 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vs99d" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.244975 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.251810 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gk9c\" (UniqueName: \"kubernetes.io/projected/7e3c4c79-5009-40f8-80f9-0d30bf57cc5a-kube-api-access-5gk9c\") pod \"mariadb-operator-controller-manager-56bbcc9d85-lsgwm\" (UID: \"7e3c4c79-5009-40f8-80f9-0d30bf57cc5a\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.262028 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q249h\" (UniqueName: \"kubernetes.io/projected/1f5bd4c4-1262-47a2-94fb-bce66ebe7929-kube-api-access-q249h\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-99nd2\" (UID: \"1f5bd4c4-1262-47a2-94fb-bce66ebe7929\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.271007 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcmd4\" (UniqueName: \"kubernetes.io/projected/68951454-b246-49ab-b604-a62c48e0b2ea-kube-api-access-mcmd4\") pod \"nova-operator-controller-manager-697bc559fc-wdt5k\" (UID: \"68951454-b246-49ab-b604-a62c48e0b2ea\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.271311 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqpht\" (UniqueName: \"kubernetes.io/projected/9d8cab18-532c-45c8-ba21-6f3bee02c722-kube-api-access-zqpht\") pod \"manila-operator-controller-manager-7c79b5df47-5nkxr\" (UID: \"9d8cab18-532c-45c8-ba21-6f3bee02c722\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.273462 4711 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.279440 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-798r9\" (UniqueName: \"kubernetes.io/projected/03d9d400-b25b-4ac4-bad3-55afbae399e4-kube-api-access-798r9\") pod \"placement-operator-controller-manager-78f8948974-mhr7r\" (UID: \"03d9d400-b25b-4ac4-bad3-55afbae399e4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.279513 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75d2\" (UniqueName: \"kubernetes.io/projected/4551cf35-cc78-43c0-a468-2e6518e336ff-kube-api-access-t75d2\") pod \"telemetry-operator-controller-manager-999cf8558-p99s8\" (UID: \"4551cf35-cc78-43c0-a468-2e6518e336ff\") " pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.283345 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f464r\" (UniqueName: \"kubernetes.io/projected/f4109dad-388a-493d-b026-6cd10b9f76dd-kube-api-access-f464r\") pod \"octavia-operator-controller-manager-998648c74-24pj6\" (UID: \"f4109dad-388a-493d-b026-6cd10b9f76dd\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.283438 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2pf5\" (UniqueName: \"kubernetes.io/projected/7f7481f9-19ef-4b29-95ef-043c7306f5cc-kube-api-access-j2pf5\") pod \"swift-operator-controller-manager-5f8c65bbfc-xdfmz\" (UID: \"7f7481f9-19ef-4b29-95ef-043c7306f5cc\") " 
pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.283504 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kx6\" (UniqueName: \"kubernetes.io/projected/6ef2b37f-78be-4a19-9d1b-b7d982032aab-kube-api-access-q7kx6\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.283602 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxrnw\" (UniqueName: \"kubernetes.io/projected/0872909b-ee36-482c-a6d7-f6d7ee6cc5ff-kube-api-access-wxrnw\") pod \"ovn-operator-controller-manager-b6456fdb6-l7s28\" (UID: \"0872909b-ee36-482c-a6d7-f6d7ee6cc5ff\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.283642 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.283979 4711 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.284043 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert podName:6ef2b37f-78be-4a19-9d1b-b7d982032aab nodeName:}" failed. 
No retries permitted until 2025-12-02 10:29:31.784026854 +0000 UTC m=+961.493393301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" (UID: "6ef2b37f-78be-4a19-9d1b-b7d982032aab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.304136 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7669g" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.311796 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f464r\" (UniqueName: \"kubernetes.io/projected/f4109dad-388a-493d-b026-6cd10b9f76dd-kube-api-access-f464r\") pod \"octavia-operator-controller-manager-998648c74-24pj6\" (UID: \"f4109dad-388a-493d-b026-6cd10b9f76dd\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.312774 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.312861 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kx6\" (UniqueName: \"kubernetes.io/projected/6ef2b37f-78be-4a19-9d1b-b7d982032aab-kube-api-access-q7kx6\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.339539 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxrnw\" (UniqueName: \"kubernetes.io/projected/0872909b-ee36-482c-a6d7-f6d7ee6cc5ff-kube-api-access-wxrnw\") pod \"ovn-operator-controller-manager-b6456fdb6-l7s28\" (UID: \"0872909b-ee36-482c-a6d7-f6d7ee6cc5ff\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.386162 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fj96f"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.386722 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-798r9\" (UniqueName: \"kubernetes.io/projected/03d9d400-b25b-4ac4-bad3-55afbae399e4-kube-api-access-798r9\") pod \"placement-operator-controller-manager-78f8948974-mhr7r\" (UID: \"03d9d400-b25b-4ac4-bad3-55afbae399e4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.386766 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t75d2\" (UniqueName: \"kubernetes.io/projected/4551cf35-cc78-43c0-a468-2e6518e336ff-kube-api-access-t75d2\") pod 
\"telemetry-operator-controller-manager-999cf8558-p99s8\" (UID: \"4551cf35-cc78-43c0-a468-2e6518e336ff\") " pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.386832 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2pf5\" (UniqueName: \"kubernetes.io/projected/7f7481f9-19ef-4b29-95ef-043c7306f5cc-kube-api-access-j2pf5\") pod \"swift-operator-controller-manager-5f8c65bbfc-xdfmz\" (UID: \"7f7481f9-19ef-4b29-95ef-043c7306f5cc\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.388195 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.391389 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-r9nt5" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.397256 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fj96f"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.409803 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-798r9\" (UniqueName: \"kubernetes.io/projected/03d9d400-b25b-4ac4-bad3-55afbae399e4-kube-api-access-798r9\") pod \"placement-operator-controller-manager-78f8948974-mhr7r\" (UID: \"03d9d400-b25b-4ac4-bad3-55afbae399e4\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.412820 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2pf5\" (UniqueName: \"kubernetes.io/projected/7f7481f9-19ef-4b29-95ef-043c7306f5cc-kube-api-access-j2pf5\") pod 
\"swift-operator-controller-manager-5f8c65bbfc-xdfmz\" (UID: \"7f7481f9-19ef-4b29-95ef-043c7306f5cc\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.420401 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-g8m44" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.433301 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.450664 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jlm2p" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.451363 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.454158 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-g42bv" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.456101 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.457156 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.461878 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.462510 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-g2z7p" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.463626 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75d2\" (UniqueName: \"kubernetes.io/projected/4551cf35-cc78-43c0-a468-2e6518e336ff-kube-api-access-t75d2\") pod \"telemetry-operator-controller-manager-999cf8558-p99s8\" (UID: \"4551cf35-cc78-43c0-a468-2e6518e336ff\") " pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.466676 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.477558 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lmnwx" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.488566 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ns5t\" (UniqueName: \"kubernetes.io/projected/13bbf4f3-8a73-45b8-80f5-52907db710c0-kube-api-access-8ns5t\") pod \"watcher-operator-controller-manager-769dc69bc-7hlb8\" (UID: \"13bbf4f3-8a73-45b8-80f5-52907db710c0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.488627 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8d6b\" (UniqueName: \"kubernetes.io/projected/01685621-3c95-4091-a03a-de8d25c67efd-kube-api-access-s8d6b\") pod \"test-operator-controller-manager-5854674fcc-fj96f\" (UID: 
\"01685621-3c95-4091-a03a-de8d25c67efd\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.490730 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.490858 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.499080 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.499941 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.503428 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.503626 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.504831 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xrbr8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.520137 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.524226 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.546334 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.563774 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.564689 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.580155 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-n2l9q" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.597007 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.597629 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv2cn\" (UniqueName: \"kubernetes.io/projected/15eaa14e-a3cd-4e68-8531-741ae62b9d58-kube-api-access-nv2cn\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.597694 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7njvt\" (UniqueName: \"kubernetes.io/projected/2c9ae2aa-9390-409b-b50f-61295577580a-kube-api-access-7njvt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k8d2c\" (UID: \"2c9ae2aa-9390-409b-b50f-61295577580a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.597713 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.597747 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.597782 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ns5t\" (UniqueName: \"kubernetes.io/projected/13bbf4f3-8a73-45b8-80f5-52907db710c0-kube-api-access-8ns5t\") pod \"watcher-operator-controller-manager-769dc69bc-7hlb8\" (UID: \"13bbf4f3-8a73-45b8-80f5-52907db710c0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.597826 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: 
\"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.597844 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8d6b\" (UniqueName: \"kubernetes.io/projected/01685621-3c95-4091-a03a-de8d25c67efd-kube-api-access-s8d6b\") pod \"test-operator-controller-manager-5854674fcc-fj96f\" (UID: \"01685621-3c95-4091-a03a-de8d25c67efd\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.598257 4711 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.598295 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert podName:90b53574-c0b1-4bc6-ba22-238abb3c5b32 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:32.598280637 +0000 UTC m=+962.307647074 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert") pod "infra-operator-controller-manager-57548d458d-tnkm7" (UID: "90b53574-c0b1-4bc6-ba22-238abb3c5b32") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.605613 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.611882 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.655970 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8d6b\" (UniqueName: \"kubernetes.io/projected/01685621-3c95-4091-a03a-de8d25c67efd-kube-api-access-s8d6b\") pod \"test-operator-controller-manager-5854674fcc-fj96f\" (UID: \"01685621-3c95-4091-a03a-de8d25c67efd\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.657659 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ns5t\" (UniqueName: \"kubernetes.io/projected/13bbf4f3-8a73-45b8-80f5-52907db710c0-kube-api-access-8ns5t\") pod \"watcher-operator-controller-manager-769dc69bc-7hlb8\" (UID: \"13bbf4f3-8a73-45b8-80f5-52907db710c0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.701695 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv2cn\" (UniqueName: \"kubernetes.io/projected/15eaa14e-a3cd-4e68-8531-741ae62b9d58-kube-api-access-nv2cn\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.701779 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7njvt\" (UniqueName: \"kubernetes.io/projected/2c9ae2aa-9390-409b-b50f-61295577580a-kube-api-access-7njvt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k8d2c\" (UID: \"2c9ae2aa-9390-409b-b50f-61295577580a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.701807 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.701911 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.702053 4711 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.702080 4711 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.702138 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:32.20211491 +0000 UTC m=+961.911481357 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "metrics-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.702156 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:32.202148721 +0000 UTC m=+961.911515168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.711556 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.723840 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njvt\" (UniqueName: \"kubernetes.io/projected/2c9ae2aa-9390-409b-b50f-61295577580a-kube-api-access-7njvt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k8d2c\" (UID: \"2c9ae2aa-9390-409b-b50f-61295577580a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.725669 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv2cn\" (UniqueName: \"kubernetes.io/projected/15eaa14e-a3cd-4e68-8531-741ae62b9d58-kube-api-access-nv2cn\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.729510 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.764838 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.765215 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n"] Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.775743 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.802723 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.802945 4711 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: E1202 10:29:31.803006 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert podName:6ef2b37f-78be-4a19-9d1b-b7d982032aab nodeName:}" failed. No retries permitted until 2025-12-02 10:29:32.802989181 +0000 UTC m=+962.512355628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" (UID: "6ef2b37f-78be-4a19-9d1b-b7d982032aab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.930274 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" Dec 02 10:29:31 crc kubenswrapper[4711]: I1202 10:29:31.986889 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.005877 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.085926 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2"] Dec 02 10:29:32 crc kubenswrapper[4711]: W1202 10:29:32.106106 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26eb7b16_7210_459f_baac_e740acdb363e.slice/crio-88195cecf5301067dc9644bee61a9caee7309fc603fa303f7de3eac338423f6a WatchSource:0}: Error finding container 88195cecf5301067dc9644bee61a9caee7309fc603fa303f7de3eac338423f6a: Status 404 returned error can't find the container with id 88195cecf5301067dc9644bee61a9caee7309fc603fa303f7de3eac338423f6a Dec 02 10:29:32 crc kubenswrapper[4711]: W1202 10:29:32.106799 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1984464_0dac_491f_a2f7_bc1f9214fef8.slice/crio-a87484fdc6dc94eccc6b1633f6edac51103bc7e60ce285a417b9958516da3905 WatchSource:0}: Error finding container a87484fdc6dc94eccc6b1633f6edac51103bc7e60ce285a417b9958516da3905: Status 404 returned error can't find the container with id a87484fdc6dc94eccc6b1633f6edac51103bc7e60ce285a417b9958516da3905 Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.120024 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj"] Dec 02 10:29:32 crc 
kubenswrapper[4711]: I1202 10:29:32.223494 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.223766 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.223893 4711 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.223941 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:33.223925376 +0000 UTC m=+962.933291813 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "webhook-server-cert" not found Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.223965 4711 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.224037 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:33.224017079 +0000 UTC m=+962.933383526 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "metrics-server-cert" not found Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.232734 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k"] Dec 02 10:29:32 crc kubenswrapper[4711]: W1202 10:29:32.243439 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod102348ad_5257_4114_acd6_e0e6c60a3c2b.slice/crio-53659f1b5c2ecd6ddb71f1e83c16dcf20098e4be2413259a48d50eab0800b16c WatchSource:0}: Error finding container 53659f1b5c2ecd6ddb71f1e83c16dcf20098e4be2413259a48d50eab0800b16c: Status 404 returned error can't find the container with id 53659f1b5c2ecd6ddb71f1e83c16dcf20098e4be2413259a48d50eab0800b16c Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.382591 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.388575 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.393270 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.397651 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.490223 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-24pj6"] Dec 02 10:29:32 crc kubenswrapper[4711]: W1202 10:29:32.493339 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4109dad_388a_493d_b026_6cd10b9f76dd.slice/crio-513eb882aa9992844616cf4aa5dd91c9965ea5d77d5ffd2082f7ea7aca8b2dae WatchSource:0}: Error finding container 513eb882aa9992844616cf4aa5dd91c9965ea5d77d5ffd2082f7ea7aca8b2dae: Status 404 returned error can't find the container with id 513eb882aa9992844616cf4aa5dd91c9965ea5d77d5ffd2082f7ea7aca8b2dae Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.495505 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.502572 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8"] Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.508788 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.155:5001/openstack-k8s-operators/telemetry-operator:f562b9da9be8769d05af2a5399070c99587c6909,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t75d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-999cf8558-p99s8_openstack-operators(4551cf35-cc78-43c0-a468-2e6518e336ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.510903 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t75d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-999cf8558-p99s8_openstack-operators(4551cf35-cc78-43c0-a468-2e6518e336ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.512817 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" podUID="4551cf35-cc78-43c0-a468-2e6518e336ff" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.611814 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.617375 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.629762 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.629905 4711 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.629980 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert podName:90b53574-c0b1-4bc6-ba22-238abb3c5b32 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:34.629962762 +0000 UTC m=+964.339329209 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert") pod "infra-operator-controller-manager-57548d458d-tnkm7" (UID: "90b53574-c0b1-4bc6-ba22-238abb3c5b32") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:32 crc kubenswrapper[4711]: W1202 10:29:32.630132 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f7481f9_19ef_4b29_95ef_043c7306f5cc.slice/crio-ec006fd66b2d636515a29844cadb602c2a55469d92d6c6bc4b99710902f698bc WatchSource:0}: Error finding container ec006fd66b2d636515a29844cadb602c2a55469d92d6c6bc4b99710902f698bc: Status 404 returned error can't find the container with id ec006fd66b2d636515a29844cadb602c2a55469d92d6c6bc4b99710902f698bc Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.630777 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-fj96f"] Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.633248 4711 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2pf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-xdfmz_openstack-operators(7f7481f9-19ef-4b29-95ef-043c7306f5cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.635699 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2pf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-xdfmz_openstack-operators(7f7481f9-19ef-4b29-95ef-043c7306f5cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.636990 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" podUID="7f7481f9-19ef-4b29-95ef-043c7306f5cc" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.638155 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm"] Dec 02 10:29:32 crc kubenswrapper[4711]: W1202 10:29:32.641118 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01685621_3c95_4091_a03a_de8d25c67efd.slice/crio-1b2767b3c4107abbf78045610a2e1c4100dc23b0a1571b4edf729f17c5dd6f75 
WatchSource:0}: Error finding container 1b2767b3c4107abbf78045610a2e1c4100dc23b0a1571b4edf729f17c5dd6f75: Status 404 returned error can't find the container with id 1b2767b3c4107abbf78045610a2e1c4100dc23b0a1571b4edf729f17c5dd6f75 Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.643528 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8d6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-fj96f_openstack-operators(01685621-3c95-4091-a03a-de8d25c67efd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.648646 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8d6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-fj96f_openstack-operators(01685621-3c95-4091-a03a-de8d25c67efd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.649817 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5gk9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-lsgwm_openstack-operators(7e3c4c79-5009-40f8-80f9-0d30bf57cc5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.649886 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" podUID="01685621-3c95-4091-a03a-de8d25c67efd" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.652648 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5gk9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-lsgwm_openstack-operators(7e3c4c79-5009-40f8-80f9-0d30bf57cc5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.654302 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" podUID="7e3c4c79-5009-40f8-80f9-0d30bf57cc5a" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.690969 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" event={"ID":"59853ec3-31ef-402d-8f5f-c12528b688f0","Type":"ContainerStarted","Data":"2cbf3981edb6d9861841a997b4884c564b5ad4c01953333aa7f4b8fc18c9c4f9"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.692284 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" event={"ID":"9d8cab18-532c-45c8-ba21-6f3bee02c722","Type":"ContainerStarted","Data":"7553123adf7ce86f41ae842d6192658ed1100b67cab7d6a13ffc40415ea4290f"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.693259 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" event={"ID":"7f7481f9-19ef-4b29-95ef-043c7306f5cc","Type":"ContainerStarted","Data":"ec006fd66b2d636515a29844cadb602c2a55469d92d6c6bc4b99710902f698bc"} Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.704080 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" podUID="7f7481f9-19ef-4b29-95ef-043c7306f5cc" Dec 02 10:29:32 crc kubenswrapper[4711]: 
I1202 10:29:32.706552 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" event={"ID":"26eb7b16-7210-459f-baac-e740acdb363e","Type":"ContainerStarted","Data":"88195cecf5301067dc9644bee61a9caee7309fc603fa303f7de3eac338423f6a"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.717185 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" event={"ID":"4551cf35-cc78-43c0-a468-2e6518e336ff","Type":"ContainerStarted","Data":"0461c4e50859c7bc917fefc6720617419af1eb5cf14efef378546a9f5278494b"} Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.724211 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.155:5001/openstack-k8s-operators/telemetry-operator:f562b9da9be8769d05af2a5399070c99587c6909\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" podUID="4551cf35-cc78-43c0-a468-2e6518e336ff" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.724294 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" event={"ID":"8284f010-fa2e-45fd-aa0f-46958a91102b","Type":"ContainerStarted","Data":"40620e764471a98212125442cfc5fd4fe90163900eec32035df5eac537c11212"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.725275 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" event={"ID":"f4109dad-388a-493d-b026-6cd10b9f76dd","Type":"ContainerStarted","Data":"513eb882aa9992844616cf4aa5dd91c9965ea5d77d5ffd2082f7ea7aca8b2dae"} Dec 02 10:29:32 
crc kubenswrapper[4711]: I1202 10:29:32.727244 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" event={"ID":"102348ad-5257-4114-acd6-e0e6c60a3c2b","Type":"ContainerStarted","Data":"53659f1b5c2ecd6ddb71f1e83c16dcf20098e4be2413259a48d50eab0800b16c"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.727693 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r"] Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.728122 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" event={"ID":"0872909b-ee36-482c-a6d7-f6d7ee6cc5ff","Type":"ContainerStarted","Data":"568e2391f2405c658c7b37b384beefabe02f151968dccb0befb8b7a486618984"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.728927 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" event={"ID":"13bbf4f3-8a73-45b8-80f5-52907db710c0","Type":"ContainerStarted","Data":"c38705ef893f7f3e45b1f325de297736a83072994c3364779d2bf91e09a2a65c"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.729657 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" event={"ID":"01685621-3c95-4091-a03a-de8d25c67efd","Type":"ContainerStarted","Data":"1b2767b3c4107abbf78045610a2e1c4100dc23b0a1571b4edf729f17c5dd6f75"} Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.731168 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" podUID="01685621-3c95-4091-a03a-de8d25c67efd" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.731285 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" event={"ID":"a1984464-0dac-491f-a2f7-bc1f9214fef8","Type":"ContainerStarted","Data":"a87484fdc6dc94eccc6b1633f6edac51103bc7e60ce285a417b9958516da3905"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.732339 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" event={"ID":"1f5bd4c4-1262-47a2-94fb-bce66ebe7929","Type":"ContainerStarted","Data":"80f5325f98e0dbd49babab78e4113dc5e452c1ad9f8882f6053706c23af3daef"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.733332 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" event={"ID":"7e3c4c79-5009-40f8-80f9-0d30bf57cc5a","Type":"ContainerStarted","Data":"0537212472e3498c610bbdbc67efe6be15af1ee99bbf10e2e2be874397796ea9"} Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.737071 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" podUID="7e3c4c79-5009-40f8-80f9-0d30bf57cc5a" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.737840 4711 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" event={"ID":"10c23c28-0e51-465d-ba7c-1becd6a7b5ee","Type":"ContainerStarted","Data":"0422893e6f8ce59c4b63ef0b992bf44bd7eb07741e46ee551efaa2580a103ef9"} Dec 02 10:29:32 crc kubenswrapper[4711]: W1202 10:29:32.742321 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03d9d400_b25b_4ac4_bad3_55afbae399e4.slice/crio-4da7f7b60bc84a5e298a20e3c3b269b842cea1df177c45e223d4b52dce53b94a WatchSource:0}: Error finding container 4da7f7b60bc84a5e298a20e3c3b269b842cea1df177c45e223d4b52dce53b94a: Status 404 returned error can't find the container with id 4da7f7b60bc84a5e298a20e3c3b269b842cea1df177c45e223d4b52dce53b94a Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.746453 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-798r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-mhr7r_openstack-operators(03d9d400-b25b-4ac4-bad3-55afbae399e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.747435 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c"] Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.748829 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-798r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-mhr7r_openstack-operators(03d9d400-b25b-4ac4-bad3-55afbae399e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.749101 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" event={"ID":"68951454-b246-49ab-b604-a62c48e0b2ea","Type":"ContainerStarted","Data":"87301eed25aa2bb8401814bc5a12e06d05ac3804ce7367de97aff324a02c12a8"} Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 
10:29:32.749944 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" event={"ID":"d5039117-0162-4158-b6f7-a3dedff319fb","Type":"ContainerStarted","Data":"84b8911c8aa3b775fce2da1d4a3dd8a9420470703cdb3c4312fc23588dbe03f2"} Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.750104 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" podUID="03d9d400-b25b-4ac4-bad3-55afbae399e4" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.750679 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" event={"ID":"0b12ad88-acba-4d9f-82ac-f59d3ca57ac8","Type":"ContainerStarted","Data":"be6a7147a16acf19e3cfde0430375624abb84ca8a87aa356177876192b2b4b94"} Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.763065 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7njvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-k8d2c_openstack-operators(2c9ae2aa-9390-409b-b50f-61295577580a): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.765214 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" podUID="2c9ae2aa-9390-409b-b50f-61295577580a" Dec 02 10:29:32 crc kubenswrapper[4711]: I1202 10:29:32.831998 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.832158 4711 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:32 crc kubenswrapper[4711]: E1202 10:29:32.832207 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert podName:6ef2b37f-78be-4a19-9d1b-b7d982032aab nodeName:}" failed. No retries permitted until 2025-12-02 10:29:34.832190738 +0000 UTC m=+964.541557185 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" (UID: "6ef2b37f-78be-4a19-9d1b-b7d982032aab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:33 crc kubenswrapper[4711]: I1202 10:29:33.236810 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.237040 4711 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:29:33 crc kubenswrapper[4711]: I1202 10:29:33.237231 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.237292 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:35.237271538 +0000 UTC m=+964.946637985 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "webhook-server-cert" not found Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.237452 4711 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.237539 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:35.237518424 +0000 UTC m=+964.946884871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "metrics-server-cert" not found Dec 02 10:29:33 crc kubenswrapper[4711]: I1202 10:29:33.778134 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" event={"ID":"2c9ae2aa-9390-409b-b50f-61295577580a","Type":"ContainerStarted","Data":"e1e8cb83398bcb79c2c01f6b1ab060c88b9595887e21c6f4b982d7c7a168412d"} Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.780850 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" podUID="2c9ae2aa-9390-409b-b50f-61295577580a" Dec 02 10:29:33 crc 
kubenswrapper[4711]: I1202 10:29:33.781862 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" event={"ID":"03d9d400-b25b-4ac4-bad3-55afbae399e4","Type":"ContainerStarted","Data":"4da7f7b60bc84a5e298a20e3c3b269b842cea1df177c45e223d4b52dce53b94a"} Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.783145 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" podUID="01685621-3c95-4091-a03a-de8d25c67efd" Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.783589 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.155:5001/openstack-k8s-operators/telemetry-operator:f562b9da9be8769d05af2a5399070c99587c6909\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" podUID="4551cf35-cc78-43c0-a468-2e6518e336ff" Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.783633 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" podUID="7f7481f9-19ef-4b29-95ef-043c7306f5cc" Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.783670 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" podUID="03d9d400-b25b-4ac4-bad3-55afbae399e4" Dec 02 10:29:33 crc kubenswrapper[4711]: E1202 10:29:33.783705 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" podUID="7e3c4c79-5009-40f8-80f9-0d30bf57cc5a" Dec 02 10:29:34 crc kubenswrapper[4711]: I1202 10:29:34.671675 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:34 crc kubenswrapper[4711]: E1202 
10:29:34.672032 4711 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:34 crc kubenswrapper[4711]: E1202 10:29:34.672091 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert podName:90b53574-c0b1-4bc6-ba22-238abb3c5b32 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:38.672073588 +0000 UTC m=+968.381440035 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert") pod "infra-operator-controller-manager-57548d458d-tnkm7" (UID: "90b53574-c0b1-4bc6-ba22-238abb3c5b32") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:34 crc kubenswrapper[4711]: E1202 10:29:34.804247 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" podUID="03d9d400-b25b-4ac4-bad3-55afbae399e4" Dec 02 10:29:34 crc kubenswrapper[4711]: E1202 10:29:34.805466 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" podUID="2c9ae2aa-9390-409b-b50f-61295577580a" Dec 02 10:29:34 crc kubenswrapper[4711]: I1202 
10:29:34.874001 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:34 crc kubenswrapper[4711]: E1202 10:29:34.874194 4711 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:34 crc kubenswrapper[4711]: E1202 10:29:34.874255 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert podName:6ef2b37f-78be-4a19-9d1b-b7d982032aab nodeName:}" failed. No retries permitted until 2025-12-02 10:29:38.874239932 +0000 UTC m=+968.583606379 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" (UID: "6ef2b37f-78be-4a19-9d1b-b7d982032aab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:35 crc kubenswrapper[4711]: I1202 10:29:35.278710 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:35 crc kubenswrapper[4711]: I1202 10:29:35.278826 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:35 crc kubenswrapper[4711]: E1202 10:29:35.279061 4711 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:29:35 crc kubenswrapper[4711]: E1202 10:29:35.279131 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:39.279110715 +0000 UTC m=+968.988477162 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "webhook-server-cert" not found Dec 02 10:29:35 crc kubenswrapper[4711]: E1202 10:29:35.279139 4711 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:29:35 crc kubenswrapper[4711]: E1202 10:29:35.279211 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:39.279192867 +0000 UTC m=+968.988559314 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "metrics-server-cert" not found Dec 02 10:29:38 crc kubenswrapper[4711]: I1202 10:29:38.726410 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:38 crc kubenswrapper[4711]: E1202 10:29:38.726630 4711 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:38 crc kubenswrapper[4711]: E1202 10:29:38.727034 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert 
podName:90b53574-c0b1-4bc6-ba22-238abb3c5b32 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:46.726995253 +0000 UTC m=+976.436361780 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert") pod "infra-operator-controller-manager-57548d458d-tnkm7" (UID: "90b53574-c0b1-4bc6-ba22-238abb3c5b32") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:29:38 crc kubenswrapper[4711]: I1202 10:29:38.929106 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:38 crc kubenswrapper[4711]: E1202 10:29:38.929302 4711 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:38 crc kubenswrapper[4711]: E1202 10:29:38.929392 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert podName:6ef2b37f-78be-4a19-9d1b-b7d982032aab nodeName:}" failed. No retries permitted until 2025-12-02 10:29:46.929369473 +0000 UTC m=+976.638735920 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" (UID: "6ef2b37f-78be-4a19-9d1b-b7d982032aab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:29:39 crc kubenswrapper[4711]: I1202 10:29:39.336525 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:39 crc kubenswrapper[4711]: I1202 10:29:39.336839 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:39 crc kubenswrapper[4711]: E1202 10:29:39.336881 4711 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:29:39 crc kubenswrapper[4711]: E1202 10:29:39.337036 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:47.336999442 +0000 UTC m=+977.046365959 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "webhook-server-cert" not found Dec 02 10:29:39 crc kubenswrapper[4711]: E1202 10:29:39.337280 4711 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:29:39 crc kubenswrapper[4711]: E1202 10:29:39.337407 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs podName:15eaa14e-a3cd-4e68-8531-741ae62b9d58 nodeName:}" failed. No retries permitted until 2025-12-02 10:29:47.337373673 +0000 UTC m=+977.046740150 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs") pod "openstack-operator-controller-manager-8f7469895-dzfgg" (UID: "15eaa14e-a3cd-4e68-8531-741ae62b9d58") : secret "metrics-server-cert" not found Dec 02 10:29:46 crc kubenswrapper[4711]: I1202 10:29:46.738835 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:46 crc kubenswrapper[4711]: I1202 10:29:46.747584 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90b53574-c0b1-4bc6-ba22-238abb3c5b32-cert\") pod \"infra-operator-controller-manager-57548d458d-tnkm7\" (UID: \"90b53574-c0b1-4bc6-ba22-238abb3c5b32\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:46 crc 
kubenswrapper[4711]: I1202 10:29:46.873088 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-49lww" Dec 02 10:29:46 crc kubenswrapper[4711]: I1202 10:29:46.880958 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:29:46 crc kubenswrapper[4711]: I1202 10:29:46.943123 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:46 crc kubenswrapper[4711]: I1202 10:29:46.948588 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ef2b37f-78be-4a19-9d1b-b7d982032aab-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs\" (UID: \"6ef2b37f-78be-4a19-9d1b-b7d982032aab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:47 crc kubenswrapper[4711]: I1202 10:29:47.158791 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:29:47 crc kubenswrapper[4711]: I1202 10:29:47.349902 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:47 crc kubenswrapper[4711]: I1202 10:29:47.350091 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:47 crc kubenswrapper[4711]: I1202 10:29:47.363132 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-webhook-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:47 crc kubenswrapper[4711]: I1202 10:29:47.365100 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15eaa14e-a3cd-4e68-8531-741ae62b9d58-metrics-certs\") pod \"openstack-operator-controller-manager-8f7469895-dzfgg\" (UID: \"15eaa14e-a3cd-4e68-8531-741ae62b9d58\") " pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:47 crc kubenswrapper[4711]: I1202 10:29:47.501887 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:29:51 crc kubenswrapper[4711]: E1202 10:29:51.731641 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Dec 02 10:29:51 crc kubenswrapper[4711]: E1202 10:29:51.732407 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gws7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-ktj75_openstack-operators(26eb7b16-7210-459f-baac-e740acdb363e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:29:52 crc kubenswrapper[4711]: I1202 10:29:52.585551 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:29:52 crc kubenswrapper[4711]: I1202 10:29:52.585721 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:29:52 crc kubenswrapper[4711]: I1202 10:29:52.585814 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:29:52 
crc kubenswrapper[4711]: I1202 10:29:52.586921 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c4568791abe9bd7256ecd483bef73160af4505d06199fa89bd749115edf5f3a"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:29:52 crc kubenswrapper[4711]: I1202 10:29:52.587111 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://8c4568791abe9bd7256ecd483bef73160af4505d06199fa89bd749115edf5f3a" gracePeriod=600 Dec 02 10:29:52 crc kubenswrapper[4711]: I1202 10:29:52.953015 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="8c4568791abe9bd7256ecd483bef73160af4505d06199fa89bd749115edf5f3a" exitCode=0 Dec 02 10:29:52 crc kubenswrapper[4711]: I1202 10:29:52.953077 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"8c4568791abe9bd7256ecd483bef73160af4505d06199fa89bd749115edf5f3a"} Dec 02 10:29:52 crc kubenswrapper[4711]: I1202 10:29:52.953146 4711 scope.go:117] "RemoveContainer" containerID="e29911e1e38ecebeadbbef681ae791a5f19b2f30398553fbf3d8e99e960526fb" Dec 02 10:29:52 crc kubenswrapper[4711]: E1202 10:29:52.995514 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 02 10:29:52 crc kubenswrapper[4711]: E1202 10:29:52.996060 4711 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9rlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-zsv2n_openstack-operators(10c23c28-0e51-465d-ba7c-1becd6a7b5ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:29:53 crc kubenswrapper[4711]: E1202 10:29:53.900817 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Dec 02 10:29:53 crc kubenswrapper[4711]: E1202 10:29:53.901254 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n744f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-cwr6k_openstack-operators(102348ad-5257-4114-acd6-e0e6c60a3c2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:29:55 crc kubenswrapper[4711]: E1202 10:29:55.836247 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" Dec 02 10:29:55 crc kubenswrapper[4711]: E1202 10:29:55.836669 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgjtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-n9x57_openstack-operators(d5039117-0162-4158-b6f7-a3dedff319fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:29:56 crc kubenswrapper[4711]: E1202 10:29:56.499212 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 02 10:29:56 crc kubenswrapper[4711]: E1202 10:29:56.499426 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f464r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-24pj6_openstack-operators(f4109dad-388a-493d-b026-6cd10b9f76dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:29:57 crc kubenswrapper[4711]: E1202 10:29:57.841088 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 02 10:29:57 crc kubenswrapper[4711]: E1202 10:29:57.841325 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shnbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-xkq8d_openstack-operators(0b12ad88-acba-4d9f-82ac-f59d3ca57ac8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.155119 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m"] Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.157324 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.160729 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.162017 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.164919 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m"] Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.253827 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ht5\" (UniqueName: \"kubernetes.io/projected/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-kube-api-access-q2ht5\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.253890 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-secret-volume\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.253931 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-config-volume\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.355307 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2ht5\" (UniqueName: \"kubernetes.io/projected/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-kube-api-access-q2ht5\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.355369 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-secret-volume\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.355394 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-config-volume\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.356260 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-config-volume\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.369801 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-secret-volume\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.371990 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2ht5\" (UniqueName: \"kubernetes.io/projected/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-kube-api-access-q2ht5\") pod \"collect-profiles-29411190-jgx5m\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:00 crc kubenswrapper[4711]: I1202 10:30:00.475772 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:02 crc kubenswrapper[4711]: E1202 10:30:02.468343 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 02 10:30:02 crc kubenswrapper[4711]: E1202 10:30:02.469447 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcmd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-wdt5k_openstack-operators(68951454-b246-49ab-b604-a62c48e0b2ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:30:03 crc kubenswrapper[4711]: I1202 10:30:03.968516 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg"] Dec 02 
10:30:04 crc kubenswrapper[4711]: I1202 10:30:04.030853 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7"] Dec 02 10:30:04 crc kubenswrapper[4711]: I1202 10:30:04.037974 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs"] Dec 02 10:30:04 crc kubenswrapper[4711]: I1202 10:30:04.041897 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"7b9ab21e8bb7413840e645c998ba8a37411c45606ceeecfb5d6d1574a7966068"} Dec 02 10:30:04 crc kubenswrapper[4711]: I1202 10:30:04.154858 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:30:04 crc kubenswrapper[4711]: I1202 10:30:04.317566 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m"] Dec 02 10:30:04 crc kubenswrapper[4711]: W1202 10:30:04.441662 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15eaa14e_a3cd_4e68_8531_741ae62b9d58.slice/crio-666c1658bbe7a22a97bf8919535132fddd01fea916ed758ce00684dc21fae9d6 WatchSource:0}: Error finding container 666c1658bbe7a22a97bf8919535132fddd01fea916ed758ce00684dc21fae9d6: Status 404 returned error can't find the container with id 666c1658bbe7a22a97bf8919535132fddd01fea916ed758ce00684dc21fae9d6 Dec 02 10:30:04 crc kubenswrapper[4711]: W1202 10:30:04.462865 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6ff6c5_8745_4f8d_83af_83ded31e5f85.slice/crio-53a022a7e5e85daf986627217274af559ab0ecb44542efdddc8927ff04ba7a99 WatchSource:0}: Error finding container 
53a022a7e5e85daf986627217274af559ab0ecb44542efdddc8927ff04ba7a99: Status 404 returned error can't find the container with id 53a022a7e5e85daf986627217274af559ab0ecb44542efdddc8927ff04ba7a99 Dec 02 10:30:04 crc kubenswrapper[4711]: W1202 10:30:04.481491 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ef2b37f_78be_4a19_9d1b_b7d982032aab.slice/crio-177ea2d9f738c54852e7fadca9abf27d33f26484e751e48ac498e12adab11f4e WatchSource:0}: Error finding container 177ea2d9f738c54852e7fadca9abf27d33f26484e751e48ac498e12adab11f4e: Status 404 returned error can't find the container with id 177ea2d9f738c54852e7fadca9abf27d33f26484e751e48ac498e12adab11f4e Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.056983 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" event={"ID":"8284f010-fa2e-45fd-aa0f-46958a91102b","Type":"ContainerStarted","Data":"e958ddccfdd61a51bc57beafda1b3b6b3e86b42fd3d303aaeaadb9126879569b"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.059003 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" event={"ID":"0c6ff6c5-8745-4f8d-83af-83ded31e5f85","Type":"ContainerStarted","Data":"53a022a7e5e85daf986627217274af559ab0ecb44542efdddc8927ff04ba7a99"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.060267 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" event={"ID":"15eaa14e-a3cd-4e68-8531-741ae62b9d58","Type":"ContainerStarted","Data":"666c1658bbe7a22a97bf8919535132fddd01fea916ed758ce00684dc21fae9d6"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.061229 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" 
event={"ID":"9d8cab18-532c-45c8-ba21-6f3bee02c722","Type":"ContainerStarted","Data":"21dd853edec0c08b2327470dec12dc862b6e16b6db19c8afce4940005e0253b7"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.061906 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" event={"ID":"90b53574-c0b1-4bc6-ba22-238abb3c5b32","Type":"ContainerStarted","Data":"a902f618c6c90e424632bdb1d727b542904b1fb3afa20e148f7c51313eb39993"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.062928 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" event={"ID":"13bbf4f3-8a73-45b8-80f5-52907db710c0","Type":"ContainerStarted","Data":"2b9e85bc5fe4241dfaef163d53e89b550ccaedc326e7899e87b1a24587142653"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.077221 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" event={"ID":"59853ec3-31ef-402d-8f5f-c12528b688f0","Type":"ContainerStarted","Data":"4f36f73e678d47bbd0e3d38a3ae975fd707852a0d1dcbe681d6644fdac2d4125"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.090753 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" event={"ID":"03d9d400-b25b-4ac4-bad3-55afbae399e4","Type":"ContainerStarted","Data":"ccdec27f2d4e9c86757cf9aa53e56007b527a48df79ad202b1bd59e638cbc63b"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.090789 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" event={"ID":"a1984464-0dac-491f-a2f7-bc1f9214fef8","Type":"ContainerStarted","Data":"838c19ba734e58a3d2dea704af16618b4a42dbefa0350aeda0990d9e910b8197"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.090799 4711 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" event={"ID":"6ef2b37f-78be-4a19-9d1b-b7d982032aab","Type":"ContainerStarted","Data":"177ea2d9f738c54852e7fadca9abf27d33f26484e751e48ac498e12adab11f4e"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.090810 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" event={"ID":"1f5bd4c4-1262-47a2-94fb-bce66ebe7929","Type":"ContainerStarted","Data":"1698c1381d5a5031fa927abe023af7b55b5f9fcb0ae236a7e045e66b5eaa3865"} Dec 02 10:30:05 crc kubenswrapper[4711]: I1202 10:30:05.090820 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" event={"ID":"0872909b-ee36-482c-a6d7-f6d7ee6cc5ff","Type":"ContainerStarted","Data":"4f5fa162a96c6529235e4c8d4c0ab253b9efb1b39625c2f897367a58af876bd3"} Dec 02 10:30:06 crc kubenswrapper[4711]: I1202 10:30:06.104879 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" event={"ID":"7e3c4c79-5009-40f8-80f9-0d30bf57cc5a","Type":"ContainerStarted","Data":"ed183871f1b041ff6f37dfb9f5f08617266e8625ee209d3546c6b98c155c0395"} Dec 02 10:30:06 crc kubenswrapper[4711]: I1202 10:30:06.106664 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" event={"ID":"01685621-3c95-4091-a03a-de8d25c67efd","Type":"ContainerStarted","Data":"c226daaaacf8c6dc2503bcf0e38b5e24bed1171a87218684bce2b9238767c6a6"} Dec 02 10:30:06 crc kubenswrapper[4711]: I1202 10:30:06.107943 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" event={"ID":"7f7481f9-19ef-4b29-95ef-043c7306f5cc","Type":"ContainerStarted","Data":"95eab8e0e9a8993ce5882ca93db36afaf3d45027e8e75c1b6c2c5ab30dc20949"} Dec 02 
10:30:08 crc kubenswrapper[4711]: I1202 10:30:08.122380 4711 generic.go:334] "Generic (PLEG): container finished" podID="0c6ff6c5-8745-4f8d-83af-83ded31e5f85" containerID="31fd1083de2f86e4a7c22037ff483048dd71d9faebaf7b28b1e4d031a55e8e56" exitCode=0 Dec 02 10:30:08 crc kubenswrapper[4711]: I1202 10:30:08.122656 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" event={"ID":"0c6ff6c5-8745-4f8d-83af-83ded31e5f85","Type":"ContainerDied","Data":"31fd1083de2f86e4a7c22037ff483048dd71d9faebaf7b28b1e4d031a55e8e56"} Dec 02 10:30:08 crc kubenswrapper[4711]: I1202 10:30:08.126291 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" event={"ID":"15eaa14e-a3cd-4e68-8531-741ae62b9d58","Type":"ContainerStarted","Data":"530795f210b7a9bebe9c96726e1b2f93ab834114e48b3aa7f84d2135d139f626"} Dec 02 10:30:08 crc kubenswrapper[4711]: I1202 10:30:08.126870 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.138988 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" event={"ID":"2c9ae2aa-9390-409b-b50f-61295577580a","Type":"ContainerStarted","Data":"73ef98d9d9ff7b5b5d14a74321d5d1c93902a2284765b6805d2fdee2e39814b6"} Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.143410 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" event={"ID":"4551cf35-cc78-43c0-a468-2e6518e336ff","Type":"ContainerStarted","Data":"e416d6f5d3faed1c2962c4c8b122ee69407faa16eaf656e853fe2cbd8f9531a9"} Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.161520 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k8d2c" podStartSLOduration=6.440794294 podStartE2EDuration="38.161466136s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.762848463 +0000 UTC m=+962.472214910" lastFinishedPulling="2025-12-02 10:30:04.483520305 +0000 UTC m=+994.192886752" observedRunningTime="2025-12-02 10:30:09.159853752 +0000 UTC m=+998.869220199" watchObservedRunningTime="2025-12-02 10:30:09.161466136 +0000 UTC m=+998.870832583" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.165108 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" podStartSLOduration=38.165094415 podStartE2EDuration="38.165094415s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:30:08.169976525 +0000 UTC m=+997.879342992" watchObservedRunningTime="2025-12-02 10:30:09.165094415 +0000 UTC m=+998.874460862" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.518742 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.616042 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-secret-volume\") pod \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.616095 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-config-volume\") pod \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.616162 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2ht5\" (UniqueName: \"kubernetes.io/projected/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-kube-api-access-q2ht5\") pod \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\" (UID: \"0c6ff6c5-8745-4f8d-83af-83ded31e5f85\") " Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.618090 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c6ff6c5-8745-4f8d-83af-83ded31e5f85" (UID: "0c6ff6c5-8745-4f8d-83af-83ded31e5f85"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.631773 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c6ff6c5-8745-4f8d-83af-83ded31e5f85" (UID: "0c6ff6c5-8745-4f8d-83af-83ded31e5f85"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.636071 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-kube-api-access-q2ht5" (OuterVolumeSpecName: "kube-api-access-q2ht5") pod "0c6ff6c5-8745-4f8d-83af-83ded31e5f85" (UID: "0c6ff6c5-8745-4f8d-83af-83ded31e5f85"). InnerVolumeSpecName "kube-api-access-q2ht5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.717344 4711 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.717375 4711 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:30:09 crc kubenswrapper[4711]: I1202 10:30:09.717386 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2ht5\" (UniqueName: \"kubernetes.io/projected/0c6ff6c5-8745-4f8d-83af-83ded31e5f85-kube-api-access-q2ht5\") on node \"crc\" DevicePath \"\"" Dec 02 10:30:09 crc kubenswrapper[4711]: E1202 10:30:09.766459 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" podUID="26eb7b16-7210-459f-baac-e740acdb363e" Dec 02 10:30:09 crc kubenswrapper[4711]: E1202 10:30:09.766625 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" podUID="102348ad-5257-4114-acd6-e0e6c60a3c2b" Dec 02 10:30:09 crc kubenswrapper[4711]: E1202 10:30:09.768430 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" podUID="d5039117-0162-4158-b6f7-a3dedff319fb" Dec 02 10:30:09 crc kubenswrapper[4711]: E1202 10:30:09.770980 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" podUID="10c23c28-0e51-465d-ba7c-1becd6a7b5ee" Dec 02 10:30:09 crc kubenswrapper[4711]: E1202 10:30:09.827392 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" podUID="68951454-b246-49ab-b604-a62c48e0b2ea" Dec 02 10:30:09 crc kubenswrapper[4711]: E1202 10:30:09.840983 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" podUID="f4109dad-388a-493d-b026-6cd10b9f76dd" Dec 02 10:30:10 crc kubenswrapper[4711]: E1202 10:30:10.132146 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" 
podUID="0b12ad88-acba-4d9f-82ac-f59d3ca57ac8" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.166706 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.170518 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.181239 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" event={"ID":"10c23c28-0e51-465d-ba7c-1becd6a7b5ee","Type":"ContainerStarted","Data":"5409af4134965cde628bbf3713067e5ef9eb412686e210d42b46cb636910e3d7"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.191363 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" event={"ID":"68951454-b246-49ab-b604-a62c48e0b2ea","Type":"ContainerStarted","Data":"1917b509a3bbb961dea3b8396d6a9d4015dcb132bcc34fec94eff876d2b66d47"} Dec 02 10:30:10 crc kubenswrapper[4711]: E1202 10:30:10.193862 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" podUID="68951454-b246-49ab-b604-a62c48e0b2ea" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.196218 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" event={"ID":"9d8cab18-532c-45c8-ba21-6f3bee02c722","Type":"ContainerStarted","Data":"83643986f8d99174f537ccff714f3924496751518a6025d8ccd48549d25e7e52"} Dec 02 10:30:10 crc 
kubenswrapper[4711]: I1202 10:30:10.196807 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.199318 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" podStartSLOduration=3.306223003 podStartE2EDuration="40.199302459s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.107830177 +0000 UTC m=+961.817196624" lastFinishedPulling="2025-12-02 10:30:09.000909593 +0000 UTC m=+998.710276080" observedRunningTime="2025-12-02 10:30:10.194010356 +0000 UTC m=+999.903376803" watchObservedRunningTime="2025-12-02 10:30:10.199302459 +0000 UTC m=+999.908668906" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.202801 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.217474 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" event={"ID":"102348ad-5257-4114-acd6-e0e6c60a3c2b","Type":"ContainerStarted","Data":"b4c797e4f22d042cf3fc704ef94fd1818229ecdfe8139f1576ae773caf6f2aa5"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.230872 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" event={"ID":"6ef2b37f-78be-4a19-9d1b-b7d982032aab","Type":"ContainerStarted","Data":"c8ed21c72080c9ad95ce08d964a053f834b24f4b8e3f7b6d02f3db275e2c4987"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.251187 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" 
event={"ID":"0872909b-ee36-482c-a6d7-f6d7ee6cc5ff","Type":"ContainerStarted","Data":"11bafb8d736ac1ccb29cb50e3fa4b9f3dedaf373fa6b3a40f2a35f54bcfac049"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.252246 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.260916 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.273359 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" event={"ID":"13bbf4f3-8a73-45b8-80f5-52907db710c0","Type":"ContainerStarted","Data":"1aff5085f4736426dd14acd081252830c4f47e15bf5a4e4d900631907e1385a2"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.273706 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.280915 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.292645 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" event={"ID":"1f5bd4c4-1262-47a2-94fb-bce66ebe7929","Type":"ContainerStarted","Data":"2f7f24378f7b7b475858f9534735a77f363473e533af744cb1c77f3cc6fc813f"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.293414 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.299692 4711 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.310236 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" event={"ID":"90b53574-c0b1-4bc6-ba22-238abb3c5b32","Type":"ContainerStarted","Data":"664221e7e5e77807a6a59158ac867cb425feee1658cd825533e4098087e131db"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.343397 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" event={"ID":"7f7481f9-19ef-4b29-95ef-043c7306f5cc","Type":"ContainerStarted","Data":"d0d2ff33b6b62909a365ae358b31b7102b93be0eeb9cf7888b6c0039f9d01591"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.345315 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.352583 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.354103 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7s28" podStartSLOduration=2.819144199 podStartE2EDuration="39.354091617s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.500906917 +0000 UTC m=+962.210273364" lastFinishedPulling="2025-12-02 10:30:09.035854335 +0000 UTC m=+998.745220782" observedRunningTime="2025-12-02 10:30:10.352422931 +0000 UTC m=+1000.061789378" watchObservedRunningTime="2025-12-02 10:30:10.354091617 +0000 UTC m=+1000.063458064" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.356148 4711 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-99nd2" podStartSLOduration=3.805622375 podStartE2EDuration="40.356141753s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.385249639 +0000 UTC m=+962.094616086" lastFinishedPulling="2025-12-02 10:30:08.935768997 +0000 UTC m=+998.645135464" observedRunningTime="2025-12-02 10:30:10.320505832 +0000 UTC m=+1000.029872279" watchObservedRunningTime="2025-12-02 10:30:10.356141753 +0000 UTC m=+1000.065508190" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.395174 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" event={"ID":"0b12ad88-acba-4d9f-82ac-f59d3ca57ac8","Type":"ContainerStarted","Data":"b8936ae7dd814e6c3a34d797baa48d23c43539ef31a7d1a24007d7fb94f9f44c"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.396840 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-7hlb8" podStartSLOduration=3.017681966 podStartE2EDuration="39.396822351s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.621188491 +0000 UTC m=+962.330554938" lastFinishedPulling="2025-12-02 10:30:09.000328836 +0000 UTC m=+998.709695323" observedRunningTime="2025-12-02 10:30:10.386249573 +0000 UTC m=+1000.095616020" watchObservedRunningTime="2025-12-02 10:30:10.396822351 +0000 UTC m=+1000.106188798" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.432358 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" event={"ID":"0c6ff6c5-8745-4f8d-83af-83ded31e5f85","Type":"ContainerDied","Data":"53a022a7e5e85daf986627217274af559ab0ecb44542efdddc8927ff04ba7a99"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.432409 4711 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53a022a7e5e85daf986627217274af559ab0ecb44542efdddc8927ff04ba7a99" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.432479 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.445526 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-5nkxr" podStartSLOduration=3.819813089 podStartE2EDuration="40.445512897s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.376282813 +0000 UTC m=+962.085649250" lastFinishedPulling="2025-12-02 10:30:09.001982591 +0000 UTC m=+998.711349058" observedRunningTime="2025-12-02 10:30:10.445455215 +0000 UTC m=+1000.154821662" watchObservedRunningTime="2025-12-02 10:30:10.445512897 +0000 UTC m=+1000.154879344" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.450889 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" event={"ID":"f4109dad-388a-493d-b026-6cd10b9f76dd","Type":"ContainerStarted","Data":"d802e0931b2c43dc7009c4ea56f50034f849b80087ab55ea3a125c7b3732f4a1"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.473113 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" event={"ID":"a1984464-0dac-491f-a2f7-bc1f9214fef8","Type":"ContainerStarted","Data":"b01efadd9b34654dcbf7a68f61c05a27102c40b8b09bc5f7141014aeade56cf4"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.476299 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.480067 
4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.482276 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" event={"ID":"7e3c4c79-5009-40f8-80f9-0d30bf57cc5a","Type":"ContainerStarted","Data":"52d7122a6d040b7156dd0afe1ffa66dd4c65275e74ac6c9f3c0e6466eab0a66d"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.483369 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.485029 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" event={"ID":"59853ec3-31ef-402d-8f5f-c12528b688f0","Type":"ContainerStarted","Data":"d52b4cd0b575ebcc4e6e9e30e49ae954badd0e83260789f6744c9e296c9de12d"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.485876 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.486791 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" event={"ID":"d5039117-0162-4158-b6f7-a3dedff319fb","Type":"ContainerStarted","Data":"4b0d753d92deb5e3ef1790d6cebe3838306bf9d572814acf9d79407420c3fdee"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.502204 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.502356 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.512228 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" event={"ID":"26eb7b16-7210-459f-baac-e740acdb363e","Type":"ContainerStarted","Data":"de7292a2ea719134d1481a361f78a207e3c942379243ced439ddbb99e9c1b1d8"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.536632 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" event={"ID":"4551cf35-cc78-43c0-a468-2e6518e336ff","Type":"ContainerStarted","Data":"500f26537af8b889c6dba8913548f157812ba5ebad834862a51b8a7569cc8c47"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.537080 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-xdfmz" podStartSLOduration=3.061169134 podStartE2EDuration="39.537057682s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.633004546 +0000 UTC m=+962.342371023" lastFinishedPulling="2025-12-02 10:30:09.108893104 +0000 UTC m=+998.818259571" observedRunningTime="2025-12-02 10:30:10.525515256 +0000 UTC m=+1000.234881703" watchObservedRunningTime="2025-12-02 10:30:10.537057682 +0000 UTC m=+1000.246424129" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.537974 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.555885 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-pbvd2" podStartSLOduration=3.714865212 podStartE2EDuration="40.555866213s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" 
firstStartedPulling="2025-12-02 10:29:32.128461384 +0000 UTC m=+961.837827821" lastFinishedPulling="2025-12-02 10:30:08.969462375 +0000 UTC m=+998.678828822" observedRunningTime="2025-12-02 10:30:10.5502406 +0000 UTC m=+1000.259607047" watchObservedRunningTime="2025-12-02 10:30:10.555866213 +0000 UTC m=+1000.265232660" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.569357 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" event={"ID":"01685621-3c95-4091-a03a-de8d25c67efd","Type":"ContainerStarted","Data":"1c39f332614ce475e9ad0dc6bb15d801114782cca629187317e9076597a79830"} Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.569400 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.579243 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.644526 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bqhpj" podStartSLOduration=3.883910606 podStartE2EDuration="40.644510698s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.164405662 +0000 UTC m=+961.873772109" lastFinishedPulling="2025-12-02 10:30:08.925005744 +0000 UTC m=+998.634372201" observedRunningTime="2025-12-02 10:30:10.604234062 +0000 UTC m=+1000.313600509" watchObservedRunningTime="2025-12-02 10:30:10.644510698 +0000 UTC m=+1000.353877145" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.743615 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-fj96f" podStartSLOduration=3.386757346 
podStartE2EDuration="39.743599688s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.643372241 +0000 UTC m=+962.352738688" lastFinishedPulling="2025-12-02 10:30:09.000214573 +0000 UTC m=+998.709581030" observedRunningTime="2025-12-02 10:30:10.742028165 +0000 UTC m=+1000.451394612" watchObservedRunningTime="2025-12-02 10:30:10.743599688 +0000 UTC m=+1000.452966135" Dec 02 10:30:10 crc kubenswrapper[4711]: I1202 10:30:10.745422 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-lsgwm" podStartSLOduration=4.385315267 podStartE2EDuration="40.745416158s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.649706994 +0000 UTC m=+962.359073441" lastFinishedPulling="2025-12-02 10:30:09.009807885 +0000 UTC m=+998.719174332" observedRunningTime="2025-12-02 10:30:10.7091581 +0000 UTC m=+1000.418524547" watchObservedRunningTime="2025-12-02 10:30:10.745416158 +0000 UTC m=+1000.454782605" Dec 02 10:30:11 crc kubenswrapper[4711]: I1202 10:30:11.111247 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" podStartSLOduration=7.950526794 podStartE2EDuration="40.111230614s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.508551146 +0000 UTC m=+962.217917583" lastFinishedPulling="2025-12-02 10:30:04.669254946 +0000 UTC m=+994.378621403" observedRunningTime="2025-12-02 10:30:10.767423677 +0000 UTC m=+1000.476790124" watchObservedRunningTime="2025-12-02 10:30:11.111230614 +0000 UTC m=+1000.820597061" Dec 02 10:30:11 crc kubenswrapper[4711]: I1202 10:30:11.577263 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" 
event={"ID":"90b53574-c0b1-4bc6-ba22-238abb3c5b32","Type":"ContainerStarted","Data":"a74daf9e1fab6cc20f4d74d7b52e481b2dd9533e5ec4255d8f7e5c1761ffc974"} Dec 02 10:30:11 crc kubenswrapper[4711]: I1202 10:30:11.580249 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v98q2" event={"ID":"8284f010-fa2e-45fd-aa0f-46958a91102b","Type":"ContainerStarted","Data":"da6928c2da360d90a94d9c2f566d5ac94697379076da2cb0e4024443821f45ec"} Dec 02 10:30:11 crc kubenswrapper[4711]: E1202 10:30:11.585091 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" podUID="68951454-b246-49ab-b604-a62c48e0b2ea" Dec 02 10:30:15 crc kubenswrapper[4711]: I1202 10:30:15.612689 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" event={"ID":"6ef2b37f-78be-4a19-9d1b-b7d982032aab","Type":"ContainerStarted","Data":"af6610b3c8bee1074b2ec085480798fa469efa243b9dbca995c3a9868baf2f98"} Dec 02 10:30:15 crc kubenswrapper[4711]: I1202 10:30:15.613943 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" event={"ID":"03d9d400-b25b-4ac4-bad3-55afbae399e4","Type":"ContainerStarted","Data":"f98a874ab2805c4ed61f5786c7c21c673c62c2a039495111ea081b2d5cba14b7"} Dec 02 10:30:17 crc kubenswrapper[4711]: I1202 10:30:17.513190 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8f7469895-dzfgg" Dec 02 10:30:21 crc kubenswrapper[4711]: I1202 10:30:21.665983 4711 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:30:21 crc kubenswrapper[4711]: I1202 10:30:21.670916 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" Dec 02 10:30:21 crc kubenswrapper[4711]: I1202 10:30:21.684046 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tnkm7" podStartSLOduration=47.210105355 podStartE2EDuration="51.684031367s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:30:04.46459083 +0000 UTC m=+994.173957267" lastFinishedPulling="2025-12-02 10:30:08.938516842 +0000 UTC m=+998.647883279" observedRunningTime="2025-12-02 10:30:21.6837576 +0000 UTC m=+1011.393124067" watchObservedRunningTime="2025-12-02 10:30:21.684031367 +0000 UTC m=+1011.393397814" Dec 02 10:30:21 crc kubenswrapper[4711]: I1202 10:30:21.782020 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-999cf8558-p99s8" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.683666 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" event={"ID":"f4109dad-388a-493d-b026-6cd10b9f76dd","Type":"ContainerStarted","Data":"f9ca261f6e9313ba77ba66a1e440bb356e08dba5482ca7a4d635dd42a3c175d0"} Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.684114 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.688125 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" 
event={"ID":"d5039117-0162-4158-b6f7-a3dedff319fb","Type":"ContainerStarted","Data":"47408c6cca0c41b4335553a95e72d728617f191bc65fd0d9d6eb1c2813b3874e"} Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.688623 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.694917 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" event={"ID":"0b12ad88-acba-4d9f-82ac-f59d3ca57ac8","Type":"ContainerStarted","Data":"924a9f6865f937c0dcd84b7b495835d53754d62b248e0dab3389c885d492e2d5"} Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.695324 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.697967 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" event={"ID":"102348ad-5257-4114-acd6-e0e6c60a3c2b","Type":"ContainerStarted","Data":"3cd9cf12d674bc51c017f06cb3b0bab57222983c6eae6481b36483ddbfbf2b07"} Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.698463 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.698507 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.707583 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.723487 4711 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" podStartSLOduration=2.963308333 podStartE2EDuration="53.723471868s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.498466199 +0000 UTC m=+962.207832646" lastFinishedPulling="2025-12-02 10:30:23.258629684 +0000 UTC m=+1012.967996181" observedRunningTime="2025-12-02 10:30:23.70779227 +0000 UTC m=+1013.417158727" watchObservedRunningTime="2025-12-02 10:30:23.723471868 +0000 UTC m=+1013.432838325" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.725270 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" podStartSLOduration=2.819229098 podStartE2EDuration="53.725259586s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.36781509 +0000 UTC m=+962.077181537" lastFinishedPulling="2025-12-02 10:30:23.273845578 +0000 UTC m=+1012.983212025" observedRunningTime="2025-12-02 10:30:23.720431434 +0000 UTC m=+1013.429797891" watchObservedRunningTime="2025-12-02 10:30:23.725259586 +0000 UTC m=+1013.434626063" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.734831 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" podStartSLOduration=2.303093272 podStartE2EDuration="53.734819987s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:31.827171276 +0000 UTC m=+961.536537723" lastFinishedPulling="2025-12-02 10:30:23.258897981 +0000 UTC m=+1012.968264438" observedRunningTime="2025-12-02 10:30:23.734379874 +0000 UTC m=+1013.443746331" watchObservedRunningTime="2025-12-02 10:30:23.734819987 +0000 UTC m=+1013.444186444" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.766090 4711 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mhr7r" podStartSLOduration=16.309353898 podStartE2EDuration="52.766064388s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.746306028 +0000 UTC m=+962.455672475" lastFinishedPulling="2025-12-02 10:30:09.203016518 +0000 UTC m=+998.912382965" observedRunningTime="2025-12-02 10:30:23.758822231 +0000 UTC m=+1013.468188698" watchObservedRunningTime="2025-12-02 10:30:23.766064388 +0000 UTC m=+1013.475430845" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.788025 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" podStartSLOduration=2.753808387 podStartE2EDuration="53.788008736s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.246149797 +0000 UTC m=+961.955516244" lastFinishedPulling="2025-12-02 10:30:23.280350136 +0000 UTC m=+1012.989716593" observedRunningTime="2025-12-02 10:30:23.785767075 +0000 UTC m=+1013.495133532" watchObservedRunningTime="2025-12-02 10:30:23.788008736 +0000 UTC m=+1013.497375173" Dec 02 10:30:23 crc kubenswrapper[4711]: I1202 10:30:23.859426 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" podStartSLOduration=48.375319812 podStartE2EDuration="52.859406791s" podCreationTimestamp="2025-12-02 10:29:31 +0000 UTC" firstStartedPulling="2025-12-02 10:30:04.485302824 +0000 UTC m=+994.194669291" lastFinishedPulling="2025-12-02 10:30:08.969389793 +0000 UTC m=+998.678756270" observedRunningTime="2025-12-02 10:30:23.841162304 +0000 UTC m=+1013.550528761" watchObservedRunningTime="2025-12-02 10:30:23.859406791 +0000 UTC m=+1013.568773238" Dec 02 10:30:24 crc kubenswrapper[4711]: I1202 10:30:24.710058 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" event={"ID":"10c23c28-0e51-465d-ba7c-1becd6a7b5ee","Type":"ContainerStarted","Data":"8c334846d61f6ee7b942927f889926314506177d7b158f3f853a353a8b710740"} Dec 02 10:30:24 crc kubenswrapper[4711]: I1202 10:30:24.711852 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" Dec 02 10:30:24 crc kubenswrapper[4711]: I1202 10:30:24.717439 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" event={"ID":"26eb7b16-7210-459f-baac-e740acdb363e","Type":"ContainerStarted","Data":"12a9de618650b6809ad6bf4e406f0162a2864f8089249e0dc74934bf60272460"} Dec 02 10:30:24 crc kubenswrapper[4711]: I1202 10:30:24.718173 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" Dec 02 10:30:24 crc kubenswrapper[4711]: I1202 10:30:24.733135 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" podStartSLOduration=3.266772112 podStartE2EDuration="54.733112644s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:31.809328166 +0000 UTC m=+961.518694613" lastFinishedPulling="2025-12-02 10:30:23.275668658 +0000 UTC m=+1012.985035145" observedRunningTime="2025-12-02 10:30:24.731828689 +0000 UTC m=+1014.441195166" watchObservedRunningTime="2025-12-02 10:30:24.733112644 +0000 UTC m=+1014.442479101" Dec 02 10:30:24 crc kubenswrapper[4711]: I1202 10:30:24.754572 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" podStartSLOduration=3.6086643880000002 
podStartE2EDuration="54.754549697s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.129047969 +0000 UTC m=+961.838414416" lastFinishedPulling="2025-12-02 10:30:23.274933278 +0000 UTC m=+1012.984299725" observedRunningTime="2025-12-02 10:30:24.747554747 +0000 UTC m=+1014.456921184" watchObservedRunningTime="2025-12-02 10:30:24.754549697 +0000 UTC m=+1014.463916144" Dec 02 10:30:26 crc kubenswrapper[4711]: I1202 10:30:26.731879 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" event={"ID":"68951454-b246-49ab-b604-a62c48e0b2ea","Type":"ContainerStarted","Data":"2697d75179eccd2204565ab6964df39677dfa5a4d2371e40d6606496be8533d9"} Dec 02 10:30:26 crc kubenswrapper[4711]: I1202 10:30:26.733644 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" Dec 02 10:30:26 crc kubenswrapper[4711]: I1202 10:30:26.767517 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" podStartSLOduration=2.984483953 podStartE2EDuration="56.767494016s" podCreationTimestamp="2025-12-02 10:29:30 +0000 UTC" firstStartedPulling="2025-12-02 10:29:32.380162909 +0000 UTC m=+962.089529356" lastFinishedPulling="2025-12-02 10:30:26.163172922 +0000 UTC m=+1015.872539419" observedRunningTime="2025-12-02 10:30:26.747641315 +0000 UTC m=+1016.457007772" watchObservedRunningTime="2025-12-02 10:30:26.767494016 +0000 UTC m=+1016.476860463" Dec 02 10:30:27 crc kubenswrapper[4711]: I1202 10:30:27.160027 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:30:27 crc kubenswrapper[4711]: I1202 10:30:27.165268 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs" Dec 02 10:30:31 crc kubenswrapper[4711]: I1202 10:30:31.028692 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zsv2n" Dec 02 10:30:31 crc kubenswrapper[4711]: I1202 10:30:31.062620 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-n9x57" Dec 02 10:30:31 crc kubenswrapper[4711]: I1202 10:30:31.120787 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ktj75" Dec 02 10:30:31 crc kubenswrapper[4711]: I1202 10:30:31.315935 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cwr6k" Dec 02 10:30:31 crc kubenswrapper[4711]: I1202 10:30:31.436750 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xkq8d" Dec 02 10:30:31 crc kubenswrapper[4711]: I1202 10:30:31.493978 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wdt5k" Dec 02 10:30:31 crc kubenswrapper[4711]: I1202 10:30:31.522491 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-24pj6" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.056270 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9sx4m"] Dec 02 10:30:48 crc kubenswrapper[4711]: E1202 10:30:48.057140 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6ff6c5-8745-4f8d-83af-83ded31e5f85" containerName="collect-profiles" Dec 02 10:30:48 crc 
kubenswrapper[4711]: I1202 10:30:48.057161 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6ff6c5-8745-4f8d-83af-83ded31e5f85" containerName="collect-profiles" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.057330 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6ff6c5-8745-4f8d-83af-83ded31e5f85" containerName="collect-profiles" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.058266 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.068554 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8wcch" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.071294 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.071439 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.071294 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.079862 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9sx4m"] Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.145330 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kg5nw"] Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.146594 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.149314 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.191716 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kg5nw"] Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.213726 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396a981c-fa06-431f-989e-425017056198-config\") pod \"dnsmasq-dns-675f4bcbfc-9sx4m\" (UID: \"396a981c-fa06-431f-989e-425017056198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.213808 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng9mb\" (UniqueName: \"kubernetes.io/projected/396a981c-fa06-431f-989e-425017056198-kube-api-access-ng9mb\") pod \"dnsmasq-dns-675f4bcbfc-9sx4m\" (UID: \"396a981c-fa06-431f-989e-425017056198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.315129 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396a981c-fa06-431f-989e-425017056198-config\") pod \"dnsmasq-dns-675f4bcbfc-9sx4m\" (UID: \"396a981c-fa06-431f-989e-425017056198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.315176 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-config\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc 
kubenswrapper[4711]: I1202 10:30:48.315194 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxx4r\" (UniqueName: \"kubernetes.io/projected/6429647a-07f2-4b13-b168-f2118ee78d7b-kube-api-access-zxx4r\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.315247 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng9mb\" (UniqueName: \"kubernetes.io/projected/396a981c-fa06-431f-989e-425017056198-kube-api-access-ng9mb\") pod \"dnsmasq-dns-675f4bcbfc-9sx4m\" (UID: \"396a981c-fa06-431f-989e-425017056198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.315310 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.316115 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396a981c-fa06-431f-989e-425017056198-config\") pod \"dnsmasq-dns-675f4bcbfc-9sx4m\" (UID: \"396a981c-fa06-431f-989e-425017056198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.344173 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng9mb\" (UniqueName: \"kubernetes.io/projected/396a981c-fa06-431f-989e-425017056198-kube-api-access-ng9mb\") pod \"dnsmasq-dns-675f4bcbfc-9sx4m\" (UID: \"396a981c-fa06-431f-989e-425017056198\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:30:48 crc kubenswrapper[4711]: 
I1202 10:30:48.379831 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.416911 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.417024 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-config\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.417048 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxx4r\" (UniqueName: \"kubernetes.io/projected/6429647a-07f2-4b13-b168-f2118ee78d7b-kube-api-access-zxx4r\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.418195 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.418753 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-config\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.435756 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxx4r\" (UniqueName: \"kubernetes.io/projected/6429647a-07f2-4b13-b168-f2118ee78d7b-kube-api-access-zxx4r\") pod \"dnsmasq-dns-78dd6ddcc-kg5nw\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.497277 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.848018 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9sx4m"] Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.922079 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" event={"ID":"396a981c-fa06-431f-989e-425017056198","Type":"ContainerStarted","Data":"9bd07254de1eb80e5f6e3e242d7d644ef74d96c19100c9523569427f9f46e7ea"} Dec 02 10:30:48 crc kubenswrapper[4711]: I1202 10:30:48.949659 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kg5nw"] Dec 02 10:30:48 crc kubenswrapper[4711]: W1202 10:30:48.954001 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6429647a_07f2_4b13_b168_f2118ee78d7b.slice/crio-273b3127216d5e9f2e7f7e80627e62e19c17d46b5e0dec529a2ef942ca955cb8 WatchSource:0}: Error finding container 273b3127216d5e9f2e7f7e80627e62e19c17d46b5e0dec529a2ef942ca955cb8: Status 404 returned error can't find the container with id 273b3127216d5e9f2e7f7e80627e62e19c17d46b5e0dec529a2ef942ca955cb8 Dec 02 10:30:49 crc kubenswrapper[4711]: I1202 10:30:49.931080 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" 
event={"ID":"6429647a-07f2-4b13-b168-f2118ee78d7b","Type":"ContainerStarted","Data":"273b3127216d5e9f2e7f7e80627e62e19c17d46b5e0dec529a2ef942ca955cb8"} Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.008864 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9sx4m"] Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.031541 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-knkz7"] Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.033655 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.037419 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-knkz7"] Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.171515 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7v7w\" (UniqueName: \"kubernetes.io/projected/40d28340-4369-46ef-9765-0c362b6fdb81-kube-api-access-r7v7w\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.171571 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-dns-svc\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.171614 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-config\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " 
pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.277602 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-dns-svc\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.277689 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-config\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.277781 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7v7w\" (UniqueName: \"kubernetes.io/projected/40d28340-4369-46ef-9765-0c362b6fdb81-kube-api-access-r7v7w\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.278631 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-dns-svc\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.279079 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-config\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.316716 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7v7w\" (UniqueName: \"kubernetes.io/projected/40d28340-4369-46ef-9765-0c362b6fdb81-kube-api-access-r7v7w\") pod \"dnsmasq-dns-666b6646f7-knkz7\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.382483 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kg5nw"] Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.383637 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.422701 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gz2qs"] Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.423900 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.436577 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gz2qs"] Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.590582 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-config\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.590694 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc 
kubenswrapper[4711]: I1202 10:30:51.590738 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqjz2\" (UniqueName: \"kubernetes.io/projected/afd67557-1628-47e8-b608-720eca21e334-kube-api-access-kqjz2\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.691414 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-config\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.691488 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.691538 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqjz2\" (UniqueName: \"kubernetes.io/projected/afd67557-1628-47e8-b608-720eca21e334-kube-api-access-kqjz2\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.692337 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-config\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.694669 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.719553 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqjz2\" (UniqueName: \"kubernetes.io/projected/afd67557-1628-47e8-b608-720eca21e334-kube-api-access-kqjz2\") pod \"dnsmasq-dns-57d769cc4f-gz2qs\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.834315 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:30:51 crc kubenswrapper[4711]: I1202 10:30:51.932683 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-knkz7"] Dec 02 10:30:51 crc kubenswrapper[4711]: W1202 10:30:51.945388 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40d28340_4369_46ef_9765_0c362b6fdb81.slice/crio-f2a6cc558d6ba706de72ba40c7ab5b4435106a39fa27831de9d79f86ca1d1442 WatchSource:0}: Error finding container f2a6cc558d6ba706de72ba40c7ab5b4435106a39fa27831de9d79f86ca1d1442: Status 404 returned error can't find the container with id f2a6cc558d6ba706de72ba40c7ab5b4435106a39fa27831de9d79f86ca1d1442 Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.157243 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.158759 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.161973 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-msfjh" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.161942 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.168749 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.168803 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.168837 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.168749 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.169133 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.171304 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.239659 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gz2qs"] Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300193 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 
10:30:52.300256 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300275 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300301 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300331 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300464 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr6kw\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-kube-api-access-mr6kw\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300545 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300578 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300600 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300615 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.300644 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411651 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411702 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411719 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411737 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411756 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411795 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" 
Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411837 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411857 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411872 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411902 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.411924 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr6kw\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-kube-api-access-mr6kw\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.412583 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.412855 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.413197 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.413533 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.418000 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.418598 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.419187 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.421619 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.421794 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.426826 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.427922 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr6kw\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-kube-api-access-mr6kw\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.437825 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.506257 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.550278 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.552276 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.555368 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.555680 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.555738 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.557627 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.557793 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.558065 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.558252 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4sctb" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.559334 4711 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.716733 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.716777 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.716810 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.716853 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.716880 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4gj\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-kube-api-access-2h4gj\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.716904 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.716927 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.716977 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdbcea35-5752-4be6-a7db-0f3aa362be58-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.717017 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.717043 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.717067 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdbcea35-5752-4be6-a7db-0f3aa362be58-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818145 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818411 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818470 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdbcea35-5752-4be6-a7db-0f3aa362be58-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818538 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818556 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818582 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818626 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818651 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h4gj\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-kube-api-access-2h4gj\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818675 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: 
I1202 10:30:52.818701 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818726 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdbcea35-5752-4be6-a7db-0f3aa362be58-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.818786 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.819742 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.819863 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.820079 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.820107 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.820311 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.822410 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdbcea35-5752-4be6-a7db-0f3aa362be58-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.823071 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdbcea35-5752-4be6-a7db-0f3aa362be58-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.825427 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.839509 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h4gj\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-kube-api-access-2h4gj\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.844904 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.852660 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.893793 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:30:52 crc kubenswrapper[4711]: I1202 10:30:52.959779 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" event={"ID":"40d28340-4369-46ef-9765-0c362b6fdb81","Type":"ContainerStarted","Data":"f2a6cc558d6ba706de72ba40c7ab5b4435106a39fa27831de9d79f86ca1d1442"} Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.814609 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.815995 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.819077 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.819128 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.819176 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.819375 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6rjzb" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.823167 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.827507 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.938993 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.939059 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.939113 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qfbwj\" (UniqueName: \"kubernetes.io/projected/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-kube-api-access-qfbwj\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.939147 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-config-data-default\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.939211 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.939250 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-kolla-config\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.939274 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:53 crc kubenswrapper[4711]: I1202 10:30:53.939304 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.040867 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.040942 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.041020 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbwj\" (UniqueName: \"kubernetes.io/projected/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-kube-api-access-qfbwj\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.041058 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-config-data-default\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.041078 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.041118 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-kolla-config\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.041148 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.041185 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.041181 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.041499 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 
10:30:54.042144 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-kolla-config\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.042172 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-config-data-default\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.043024 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.056626 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.056697 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.057542 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbwj\" (UniqueName: 
\"kubernetes.io/projected/12dcc0fa-368d-4a71-99ee-fe27e2cd410a-kube-api-access-qfbwj\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.062449 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"12dcc0fa-368d-4a71-99ee-fe27e2cd410a\") " pod="openstack/openstack-galera-0" Dec 02 10:30:54 crc kubenswrapper[4711]: I1202 10:30:54.144254 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.267725 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.274195 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.278096 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7tcsh" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.278359 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.278094 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.282236 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.287903 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.372820 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.372901 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de720bf-9fe1-40cb-888c-1868fbc89f63-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.372921 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1de720bf-9fe1-40cb-888c-1868fbc89f63-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.372937 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de720bf-9fe1-40cb-888c-1868fbc89f63-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.372980 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 
10:30:55.373012 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.373047 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c8w5\" (UniqueName: \"kubernetes.io/projected/1de720bf-9fe1-40cb-888c-1868fbc89f63-kube-api-access-4c8w5\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.373072 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.474676 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de720bf-9fe1-40cb-888c-1868fbc89f63-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.475009 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1de720bf-9fe1-40cb-888c-1868fbc89f63-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 
10:30:55.475103 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de720bf-9fe1-40cb-888c-1868fbc89f63-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.475190 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.475284 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.475387 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c8w5\" (UniqueName: \"kubernetes.io/projected/1de720bf-9fe1-40cb-888c-1868fbc89f63-kube-api-access-4c8w5\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.475495 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.475619 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.475744 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.476659 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.475425 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1de720bf-9fe1-40cb-888c-1868fbc89f63-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.477379 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.484664 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/1de720bf-9fe1-40cb-888c-1868fbc89f63-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.494720 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de720bf-9fe1-40cb-888c-1868fbc89f63-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.495541 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c8w5\" (UniqueName: \"kubernetes.io/projected/1de720bf-9fe1-40cb-888c-1868fbc89f63-kube-api-access-4c8w5\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.495815 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de720bf-9fe1-40cb-888c-1868fbc89f63-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.509653 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1de720bf-9fe1-40cb-888c-1868fbc89f63\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.553254 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.554268 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.557495 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.557652 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-jkzzt" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.557588 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.587987 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.621480 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.679087 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ad898b-6abc-49b9-8f12-5e2da28b6479-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.679152 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2ad898b-6abc-49b9-8f12-5e2da28b6479-kolla-config\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.679412 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ad898b-6abc-49b9-8f12-5e2da28b6479-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " 
pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.679491 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2ad898b-6abc-49b9-8f12-5e2da28b6479-config-data\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.679538 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkwz4\" (UniqueName: \"kubernetes.io/projected/f2ad898b-6abc-49b9-8f12-5e2da28b6479-kube-api-access-bkwz4\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.780522 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ad898b-6abc-49b9-8f12-5e2da28b6479-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.780591 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2ad898b-6abc-49b9-8f12-5e2da28b6479-config-data\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.780625 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkwz4\" (UniqueName: \"kubernetes.io/projected/f2ad898b-6abc-49b9-8f12-5e2da28b6479-kube-api-access-bkwz4\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.780645 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ad898b-6abc-49b9-8f12-5e2da28b6479-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.780679 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2ad898b-6abc-49b9-8f12-5e2da28b6479-kolla-config\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.781456 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2ad898b-6abc-49b9-8f12-5e2da28b6479-kolla-config\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.781456 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2ad898b-6abc-49b9-8f12-5e2da28b6479-config-data\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.785491 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2ad898b-6abc-49b9-8f12-5e2da28b6479-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.795923 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ad898b-6abc-49b9-8f12-5e2da28b6479-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc 
kubenswrapper[4711]: I1202 10:30:55.809497 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkwz4\" (UniqueName: \"kubernetes.io/projected/f2ad898b-6abc-49b9-8f12-5e2da28b6479-kube-api-access-bkwz4\") pod \"memcached-0\" (UID: \"f2ad898b-6abc-49b9-8f12-5e2da28b6479\") " pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: I1202 10:30:55.872320 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 10:30:55 crc kubenswrapper[4711]: W1202 10:30:55.989330 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd67557_1628_47e8_b608_720eca21e334.slice/crio-676c357d32799df2df8f85473a0bc59f1e9273e80635e740e49965d09c4513e1 WatchSource:0}: Error finding container 676c357d32799df2df8f85473a0bc59f1e9273e80635e740e49965d09c4513e1: Status 404 returned error can't find the container with id 676c357d32799df2df8f85473a0bc59f1e9273e80635e740e49965d09c4513e1 Dec 02 10:30:56 crc kubenswrapper[4711]: I1202 10:30:56.987798 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" event={"ID":"afd67557-1628-47e8-b608-720eca21e334","Type":"ContainerStarted","Data":"676c357d32799df2df8f85473a0bc59f1e9273e80635e740e49965d09c4513e1"} Dec 02 10:30:57 crc kubenswrapper[4711]: I1202 10:30:57.621668 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:30:57 crc kubenswrapper[4711]: I1202 10:30:57.622619 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:30:57 crc kubenswrapper[4711]: I1202 10:30:57.624462 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qsqhc" Dec 02 10:30:57 crc kubenswrapper[4711]: I1202 10:30:57.668663 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:30:57 crc kubenswrapper[4711]: I1202 10:30:57.710763 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvm5\" (UniqueName: \"kubernetes.io/projected/d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed-kube-api-access-wjvm5\") pod \"kube-state-metrics-0\" (UID: \"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed\") " pod="openstack/kube-state-metrics-0" Dec 02 10:30:57 crc kubenswrapper[4711]: I1202 10:30:57.811912 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjvm5\" (UniqueName: \"kubernetes.io/projected/d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed-kube-api-access-wjvm5\") pod \"kube-state-metrics-0\" (UID: \"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed\") " pod="openstack/kube-state-metrics-0" Dec 02 10:30:57 crc kubenswrapper[4711]: I1202 10:30:57.832076 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjvm5\" (UniqueName: \"kubernetes.io/projected/d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed-kube-api-access-wjvm5\") pod \"kube-state-metrics-0\" (UID: \"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed\") " pod="openstack/kube-state-metrics-0" Dec 02 10:30:57 crc kubenswrapper[4711]: I1202 10:30:57.940171 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:30:59 crc kubenswrapper[4711]: I1202 10:30:59.620900 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 10:30:59 crc kubenswrapper[4711]: I1202 10:30:59.706234 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.614220 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.615680 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.617511 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.617787 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.618200 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.618220 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-ngp78" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.621184 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.622408 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.680485 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.680537 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.680619 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fff7494-ee8a-4c45-87de-00444f64be54-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.680652 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvznr\" (UniqueName: \"kubernetes.io/projected/7fff7494-ee8a-4c45-87de-00444f64be54-kube-api-access-fvznr\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.680720 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fff7494-ee8a-4c45-87de-00444f64be54-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.680761 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " 
pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.680783 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.680828 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fff7494-ee8a-4c45-87de-00444f64be54-config\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782528 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fff7494-ee8a-4c45-87de-00444f64be54-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782594 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782613 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782702 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fff7494-ee8a-4c45-87de-00444f64be54-config\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782758 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782776 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782800 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fff7494-ee8a-4c45-87de-00444f64be54-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782824 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvznr\" (UniqueName: \"kubernetes.io/projected/7fff7494-ee8a-4c45-87de-00444f64be54-kube-api-access-fvznr\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.782988 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fff7494-ee8a-4c45-87de-00444f64be54-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: 
\"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.783266 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.783874 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fff7494-ee8a-4c45-87de-00444f64be54-config\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.784206 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fff7494-ee8a-4c45-87de-00444f64be54-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.811894 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.818914 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvznr\" (UniqueName: \"kubernetes.io/projected/7fff7494-ee8a-4c45-87de-00444f64be54-kube-api-access-fvznr\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.820428 4711 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.827897 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.828657 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fff7494-ee8a-4c45-87de-00444f64be54-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7fff7494-ee8a-4c45-87de-00444f64be54\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:01 crc kubenswrapper[4711]: I1202 10:31:01.948447 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.179966 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q57lb"] Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.180892 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.183848 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-cwxmg" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.183885 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.186846 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.203387 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q57lb"] Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.211136 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lxtbd"] Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.213167 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.233891 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lxtbd"] Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289063 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ce53b33-b78a-446d-b345-c8d918209ddf-scripts\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289112 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstr7\" (UniqueName: \"kubernetes.io/projected/82b00f57-beb4-43ad-a1c5-cc9790bb167e-kube-api-access-bstr7\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289132 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce53b33-b78a-446d-b345-c8d918209ddf-ovn-controller-tls-certs\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289153 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-lib\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289170 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-log-ovn\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289191 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-etc-ovs\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289212 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-run\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289245 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-run-ovn\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289265 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce53b33-b78a-446d-b345-c8d918209ddf-combined-ca-bundle\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289316 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-log\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289358 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-run\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289386 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ll4n\" (UniqueName: \"kubernetes.io/projected/7ce53b33-b78a-446d-b345-c8d918209ddf-kube-api-access-4ll4n\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.289406 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82b00f57-beb4-43ad-a1c5-cc9790bb167e-scripts\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.390752 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-log\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.390833 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-run\") pod 
\"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.390876 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ll4n\" (UniqueName: \"kubernetes.io/projected/7ce53b33-b78a-446d-b345-c8d918209ddf-kube-api-access-4ll4n\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.390901 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82b00f57-beb4-43ad-a1c5-cc9790bb167e-scripts\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.390934 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ce53b33-b78a-446d-b345-c8d918209ddf-scripts\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.390982 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstr7\" (UniqueName: \"kubernetes.io/projected/82b00f57-beb4-43ad-a1c5-cc9790bb167e-kube-api-access-bstr7\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.391004 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce53b33-b78a-446d-b345-c8d918209ddf-ovn-controller-tls-certs\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " 
pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.391035 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-lib\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.391055 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-log-ovn\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.391085 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-etc-ovs\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.391112 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-run\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.391154 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-run-ovn\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.391183 4711 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce53b33-b78a-446d-b345-c8d918209ddf-combined-ca-bundle\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.392790 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-log\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.392986 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-run\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.394296 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-run\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.394330 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-log-ovn\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.394758 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ce53b33-b78a-446d-b345-c8d918209ddf-var-run-ovn\") pod \"ovn-controller-q57lb\" (UID: 
\"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.394889 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-etc-ovs\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.395017 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/82b00f57-beb4-43ad-a1c5-cc9790bb167e-var-lib\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.395834 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82b00f57-beb4-43ad-a1c5-cc9790bb167e-scripts\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.397130 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ce53b33-b78a-446d-b345-c8d918209ddf-scripts\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.397854 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce53b33-b78a-446d-b345-c8d918209ddf-combined-ca-bundle\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.398436 4711 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce53b33-b78a-446d-b345-c8d918209ddf-ovn-controller-tls-certs\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.414480 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ll4n\" (UniqueName: \"kubernetes.io/projected/7ce53b33-b78a-446d-b345-c8d918209ddf-kube-api-access-4ll4n\") pod \"ovn-controller-q57lb\" (UID: \"7ce53b33-b78a-446d-b345-c8d918209ddf\") " pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.419811 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstr7\" (UniqueName: \"kubernetes.io/projected/82b00f57-beb4-43ad-a1c5-cc9790bb167e-kube-api-access-bstr7\") pod \"ovn-controller-ovs-lxtbd\" (UID: \"82b00f57-beb4-43ad-a1c5-cc9790bb167e\") " pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.516434 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q57lb" Dec 02 10:31:02 crc kubenswrapper[4711]: I1202 10:31:02.530829 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.023748 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.025561 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.027772 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.027807 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.027974 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.029389 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ggz6x" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.038236 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.145063 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99032f62-533c-4fa2-887c-41a25a505906-config\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.145118 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99032f62-533c-4fa2-887c-41a25a505906-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.145167 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5pjr\" (UniqueName: \"kubernetes.io/projected/99032f62-533c-4fa2-887c-41a25a505906-kube-api-access-m5pjr\") pod \"ovsdbserver-sb-0\" (UID: 
\"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.145225 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.145285 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.145317 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.145353 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99032f62-533c-4fa2-887c-41a25a505906-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.145373 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 
02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.246239 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99032f62-533c-4fa2-887c-41a25a505906-config\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.246303 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99032f62-533c-4fa2-887c-41a25a505906-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.246339 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5pjr\" (UniqueName: \"kubernetes.io/projected/99032f62-533c-4fa2-887c-41a25a505906-kube-api-access-m5pjr\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.246382 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.246429 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.246459 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.246782 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.247036 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99032f62-533c-4fa2-887c-41a25a505906-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.247901 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99032f62-533c-4fa2-887c-41a25a505906-config\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.247998 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.248041 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99032f62-533c-4fa2-887c-41a25a505906-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" 
Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.249341 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99032f62-533c-4fa2-887c-41a25a505906-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.251758 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.257069 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.260084 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99032f62-533c-4fa2-887c-41a25a505906-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.281199 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.288845 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5pjr\" (UniqueName: 
\"kubernetes.io/projected/99032f62-533c-4fa2-887c-41a25a505906-kube-api-access-m5pjr\") pod \"ovsdbserver-sb-0\" (UID: \"99032f62-533c-4fa2-887c-41a25a505906\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.362325 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:05 crc kubenswrapper[4711]: I1202 10:31:05.700617 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 10:31:06 crc kubenswrapper[4711]: I1202 10:31:06.073572 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1de720bf-9fe1-40cb-888c-1868fbc89f63","Type":"ContainerStarted","Data":"434a019752eb57be791250e2ee6c715197b871afdad4fd197c0b000e7e783c66"} Dec 02 10:31:06 crc kubenswrapper[4711]: I1202 10:31:06.075877 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29","Type":"ContainerStarted","Data":"516840f2de76f0c26d4d8ec8505bd564b9d6063845c110a5ebd53e69e166f07c"} Dec 02 10:31:07 crc kubenswrapper[4711]: I1202 10:31:07.096639 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f2ad898b-6abc-49b9-8f12-5e2da28b6479","Type":"ContainerStarted","Data":"ca961c956154b65f6f7424720fedff08a3cbcafd8c02930599de1a71f1ec0f4a"} Dec 02 10:31:07 crc kubenswrapper[4711]: E1202 10:31:07.115893 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 10:31:07 crc kubenswrapper[4711]: E1202 10:31:07.116227 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng9mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-9sx4m_openstack(396a981c-fa06-431f-989e-425017056198): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:31:07 crc 
kubenswrapper[4711]: E1202 10:31:07.117868 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" podUID="396a981c-fa06-431f-989e-425017056198" Dec 02 10:31:07 crc kubenswrapper[4711]: E1202 10:31:07.230898 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 10:31:07 crc kubenswrapper[4711]: E1202 10:31:07.231461 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxx4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-kg5nw_openstack(6429647a-07f2-4b13-b168-f2118ee78d7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:31:07 crc kubenswrapper[4711]: E1202 10:31:07.232542 4711 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" podUID="6429647a-07f2-4b13-b168-f2118ee78d7b" Dec 02 10:31:07 crc kubenswrapper[4711]: I1202 10:31:07.261394 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:31:07 crc kubenswrapper[4711]: I1202 10:31:07.354485 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 10:31:07 crc kubenswrapper[4711]: I1202 10:31:07.361226 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:31:07 crc kubenswrapper[4711]: I1202 10:31:07.430324 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q57lb"] Dec 02 10:31:07 crc kubenswrapper[4711]: W1202 10:31:07.456157 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5f196a7_6e9f_4574_8dda_07ee9b4fd4ed.slice/crio-1bfe1049f953291c544614d722f12c36129b8201579f3f3b2d3227ff6563e3e6 WatchSource:0}: Error finding container 1bfe1049f953291c544614d722f12c36129b8201579f3f3b2d3227ff6563e3e6: Status 404 returned error can't find the container with id 1bfe1049f953291c544614d722f12c36129b8201579f3f3b2d3227ff6563e3e6 Dec 02 10:31:07 crc kubenswrapper[4711]: W1202 10:31:07.458037 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12dcc0fa_368d_4a71_99ee_fe27e2cd410a.slice/crio-73a7a4cbbaf0ccd7e901209e98ef9984c2f630759bdb04c2ad1f60d1e7368114 WatchSource:0}: Error finding container 73a7a4cbbaf0ccd7e901209e98ef9984c2f630759bdb04c2ad1f60d1e7368114: Status 404 returned error can't find the container with id 73a7a4cbbaf0ccd7e901209e98ef9984c2f630759bdb04c2ad1f60d1e7368114 Dec 02 10:31:07 crc kubenswrapper[4711]: 
W1202 10:31:07.604376 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99032f62_533c_4fa2_887c_41a25a505906.slice/crio-b9e6f5ce1949ce0684f3fa364e770e87db61f2d0e45c0dca4c2631b27fa42220 WatchSource:0}: Error finding container b9e6f5ce1949ce0684f3fa364e770e87db61f2d0e45c0dca4c2631b27fa42220: Status 404 returned error can't find the container with id b9e6f5ce1949ce0684f3fa364e770e87db61f2d0e45c0dca4c2631b27fa42220 Dec 02 10:31:07 crc kubenswrapper[4711]: I1202 10:31:07.604753 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.121765 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdbcea35-5752-4be6-a7db-0f3aa362be58","Type":"ContainerStarted","Data":"4dc5b5d5cc462c13394e27631c065f2e1a47448fb395e66251601a9e242e91d0"} Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.124430 4711 generic.go:334] "Generic (PLEG): container finished" podID="afd67557-1628-47e8-b608-720eca21e334" containerID="49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b" exitCode=0 Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.124861 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" event={"ID":"afd67557-1628-47e8-b608-720eca21e334","Type":"ContainerDied","Data":"49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b"} Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.127552 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"12dcc0fa-368d-4a71-99ee-fe27e2cd410a","Type":"ContainerStarted","Data":"73a7a4cbbaf0ccd7e901209e98ef9984c2f630759bdb04c2ad1f60d1e7368114"} Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.129595 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"99032f62-533c-4fa2-887c-41a25a505906","Type":"ContainerStarted","Data":"b9e6f5ce1949ce0684f3fa364e770e87db61f2d0e45c0dca4c2631b27fa42220"} Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.131809 4711 generic.go:334] "Generic (PLEG): container finished" podID="40d28340-4369-46ef-9765-0c362b6fdb81" containerID="1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d" exitCode=0 Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.131991 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" event={"ID":"40d28340-4369-46ef-9765-0c362b6fdb81","Type":"ContainerDied","Data":"1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d"} Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.135798 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed","Type":"ContainerStarted","Data":"1bfe1049f953291c544614d722f12c36129b8201579f3f3b2d3227ff6563e3e6"} Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.137243 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb" event={"ID":"7ce53b33-b78a-446d-b345-c8d918209ddf","Type":"ContainerStarted","Data":"f4393238cc4ddb602e33441b07d7f72003373edf7e8902587e7d84d863c606a9"} Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.261420 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lxtbd"] Dec 02 10:31:08 crc kubenswrapper[4711]: W1202 10:31:08.295762 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82b00f57_beb4_43ad_a1c5_cc9790bb167e.slice/crio-748e8108a6d67b719bf92a56dbb05ec091a73e68434c1dee0e350e878c18e470 WatchSource:0}: Error finding container 748e8108a6d67b719bf92a56dbb05ec091a73e68434c1dee0e350e878c18e470: Status 404 returned error can't find the container with id 
748e8108a6d67b719bf92a56dbb05ec091a73e68434c1dee0e350e878c18e470 Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.618607 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.683779 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.692925 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:31:08 crc kubenswrapper[4711]: W1202 10:31:08.695037 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fff7494_ee8a_4c45_87de_00444f64be54.slice/crio-0c39054fdf71926fcd499d0e940235b08fb9509118a972bc49c1ee4cbae85299 WatchSource:0}: Error finding container 0c39054fdf71926fcd499d0e940235b08fb9509118a972bc49c1ee4cbae85299: Status 404 returned error can't find the container with id 0c39054fdf71926fcd499d0e940235b08fb9509118a972bc49c1ee4cbae85299 Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.736665 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxx4r\" (UniqueName: \"kubernetes.io/projected/6429647a-07f2-4b13-b168-f2118ee78d7b-kube-api-access-zxx4r\") pod \"6429647a-07f2-4b13-b168-f2118ee78d7b\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.736768 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-dns-svc\") pod \"6429647a-07f2-4b13-b168-f2118ee78d7b\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.736896 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ng9mb\" (UniqueName: \"kubernetes.io/projected/396a981c-fa06-431f-989e-425017056198-kube-api-access-ng9mb\") pod \"396a981c-fa06-431f-989e-425017056198\" (UID: \"396a981c-fa06-431f-989e-425017056198\") " Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.736944 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396a981c-fa06-431f-989e-425017056198-config\") pod \"396a981c-fa06-431f-989e-425017056198\" (UID: \"396a981c-fa06-431f-989e-425017056198\") " Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.736994 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-config\") pod \"6429647a-07f2-4b13-b168-f2118ee78d7b\" (UID: \"6429647a-07f2-4b13-b168-f2118ee78d7b\") " Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.738038 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-config" (OuterVolumeSpecName: "config") pod "6429647a-07f2-4b13-b168-f2118ee78d7b" (UID: "6429647a-07f2-4b13-b168-f2118ee78d7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.742889 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6429647a-07f2-4b13-b168-f2118ee78d7b" (UID: "6429647a-07f2-4b13-b168-f2118ee78d7b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.743426 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396a981c-fa06-431f-989e-425017056198-kube-api-access-ng9mb" (OuterVolumeSpecName: "kube-api-access-ng9mb") pod "396a981c-fa06-431f-989e-425017056198" (UID: "396a981c-fa06-431f-989e-425017056198"). InnerVolumeSpecName "kube-api-access-ng9mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.743633 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6429647a-07f2-4b13-b168-f2118ee78d7b-kube-api-access-zxx4r" (OuterVolumeSpecName: "kube-api-access-zxx4r") pod "6429647a-07f2-4b13-b168-f2118ee78d7b" (UID: "6429647a-07f2-4b13-b168-f2118ee78d7b"). InnerVolumeSpecName "kube-api-access-zxx4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.753312 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396a981c-fa06-431f-989e-425017056198-config" (OuterVolumeSpecName: "config") pod "396a981c-fa06-431f-989e-425017056198" (UID: "396a981c-fa06-431f-989e-425017056198"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.839633 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxx4r\" (UniqueName: \"kubernetes.io/projected/6429647a-07f2-4b13-b168-f2118ee78d7b-kube-api-access-zxx4r\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.839672 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.839686 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng9mb\" (UniqueName: \"kubernetes.io/projected/396a981c-fa06-431f-989e-425017056198-kube-api-access-ng9mb\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.839700 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396a981c-fa06-431f-989e-425017056198-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:08 crc kubenswrapper[4711]: I1202 10:31:08.839715 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6429647a-07f2-4b13-b168-f2118ee78d7b-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.146862 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7fff7494-ee8a-4c45-87de-00444f64be54","Type":"ContainerStarted","Data":"0c39054fdf71926fcd499d0e940235b08fb9509118a972bc49c1ee4cbae85299"} Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.149532 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lxtbd" event={"ID":"82b00f57-beb4-43ad-a1c5-cc9790bb167e","Type":"ContainerStarted","Data":"748e8108a6d67b719bf92a56dbb05ec091a73e68434c1dee0e350e878c18e470"} Dec 02 
10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.150486 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" event={"ID":"396a981c-fa06-431f-989e-425017056198","Type":"ContainerDied","Data":"9bd07254de1eb80e5f6e3e242d7d644ef74d96c19100c9523569427f9f46e7ea"} Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.150546 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9sx4m" Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.157135 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" event={"ID":"6429647a-07f2-4b13-b168-f2118ee78d7b","Type":"ContainerDied","Data":"273b3127216d5e9f2e7f7e80627e62e19c17d46b5e0dec529a2ef942ca955cb8"} Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.157209 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kg5nw" Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.216001 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9sx4m"] Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.228657 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9sx4m"] Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.240208 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kg5nw"] Dec 02 10:31:09 crc kubenswrapper[4711]: I1202 10:31:09.244716 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kg5nw"] Dec 02 10:31:11 crc kubenswrapper[4711]: I1202 10:31:11.108028 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="396a981c-fa06-431f-989e-425017056198" path="/var/lib/kubelet/pods/396a981c-fa06-431f-989e-425017056198/volumes" Dec 02 10:31:11 crc kubenswrapper[4711]: I1202 10:31:11.109258 4711 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6429647a-07f2-4b13-b168-f2118ee78d7b" path="/var/lib/kubelet/pods/6429647a-07f2-4b13-b168-f2118ee78d7b/volumes" Dec 02 10:31:11 crc kubenswrapper[4711]: I1202 10:31:11.174263 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" event={"ID":"afd67557-1628-47e8-b608-720eca21e334","Type":"ContainerStarted","Data":"fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154"} Dec 02 10:31:11 crc kubenswrapper[4711]: I1202 10:31:11.175040 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:31:11 crc kubenswrapper[4711]: I1202 10:31:11.181648 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" event={"ID":"40d28340-4369-46ef-9765-0c362b6fdb81","Type":"ContainerStarted","Data":"34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395"} Dec 02 10:31:11 crc kubenswrapper[4711]: I1202 10:31:11.182136 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:31:11 crc kubenswrapper[4711]: I1202 10:31:11.254287 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" podStartSLOduration=9.094454449 podStartE2EDuration="20.254238858s" podCreationTimestamp="2025-12-02 10:30:51 +0000 UTC" firstStartedPulling="2025-12-02 10:30:55.993278289 +0000 UTC m=+1045.702644736" lastFinishedPulling="2025-12-02 10:31:07.153062698 +0000 UTC m=+1056.862429145" observedRunningTime="2025-12-02 10:31:11.247349379 +0000 UTC m=+1060.956715826" watchObservedRunningTime="2025-12-02 10:31:11.254238858 +0000 UTC m=+1060.963605305" Dec 02 10:31:16 crc kubenswrapper[4711]: I1202 10:31:16.222401 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"1de720bf-9fe1-40cb-888c-1868fbc89f63","Type":"ContainerStarted","Data":"028bbb30b123df3b804102065db81d2141165f00d82fea6d79c16e881a91b553"} Dec 02 10:31:16 crc kubenswrapper[4711]: I1202 10:31:16.251601 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" podStartSLOduration=10.084717514 podStartE2EDuration="25.251577377s" podCreationTimestamp="2025-12-02 10:30:51 +0000 UTC" firstStartedPulling="2025-12-02 10:30:51.950332549 +0000 UTC m=+1041.659698996" lastFinishedPulling="2025-12-02 10:31:07.117192422 +0000 UTC m=+1056.826558859" observedRunningTime="2025-12-02 10:31:11.268530201 +0000 UTC m=+1060.977896658" watchObservedRunningTime="2025-12-02 10:31:16.251577377 +0000 UTC m=+1065.960943824" Dec 02 10:31:16 crc kubenswrapper[4711]: I1202 10:31:16.385827 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:31:16 crc kubenswrapper[4711]: I1202 10:31:16.837218 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:31:16 crc kubenswrapper[4711]: I1202 10:31:16.907770 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-knkz7"] Dec 02 10:31:17 crc kubenswrapper[4711]: I1202 10:31:17.231065 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lxtbd" event={"ID":"82b00f57-beb4-43ad-a1c5-cc9790bb167e","Type":"ContainerStarted","Data":"4a06e598337b189eabe7c2bc0e7b17914dfe868c2ac84b6afed2844a167a78a5"} Dec 02 10:31:17 crc kubenswrapper[4711]: I1202 10:31:17.232348 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f2ad898b-6abc-49b9-8f12-5e2da28b6479","Type":"ContainerStarted","Data":"9c8c1e4e126598e57808cf8f8ca6fc4b2cbb91dbecac7344be97fb10330986e4"} Dec 02 10:31:17 crc kubenswrapper[4711]: I1202 10:31:17.233386 4711 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" podUID="40d28340-4369-46ef-9765-0c362b6fdb81" containerName="dnsmasq-dns" containerID="cri-o://34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395" gracePeriod=10 Dec 02 10:31:17 crc kubenswrapper[4711]: I1202 10:31:17.279416 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.831841055 podStartE2EDuration="22.279386924s" podCreationTimestamp="2025-12-02 10:30:55 +0000 UTC" firstStartedPulling="2025-12-02 10:31:06.763128528 +0000 UTC m=+1056.472495005" lastFinishedPulling="2025-12-02 10:31:14.210674427 +0000 UTC m=+1063.920040874" observedRunningTime="2025-12-02 10:31:17.271087745 +0000 UTC m=+1066.980454192" watchObservedRunningTime="2025-12-02 10:31:17.279386924 +0000 UTC m=+1066.988753371" Dec 02 10:31:17 crc kubenswrapper[4711]: I1202 10:31:17.899847 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:31:17 crc kubenswrapper[4711]: I1202 10:31:17.999715 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-dns-svc\") pod \"40d28340-4369-46ef-9765-0c362b6fdb81\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " Dec 02 10:31:17 crc kubenswrapper[4711]: I1202 10:31:17.999841 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-config\") pod \"40d28340-4369-46ef-9765-0c362b6fdb81\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:17.999864 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7v7w\" (UniqueName: \"kubernetes.io/projected/40d28340-4369-46ef-9765-0c362b6fdb81-kube-api-access-r7v7w\") pod \"40d28340-4369-46ef-9765-0c362b6fdb81\" (UID: \"40d28340-4369-46ef-9765-0c362b6fdb81\") " Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.019387 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d28340-4369-46ef-9765-0c362b6fdb81-kube-api-access-r7v7w" (OuterVolumeSpecName: "kube-api-access-r7v7w") pod "40d28340-4369-46ef-9765-0c362b6fdb81" (UID: "40d28340-4369-46ef-9765-0c362b6fdb81"). InnerVolumeSpecName "kube-api-access-r7v7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.055059 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40d28340-4369-46ef-9765-0c362b6fdb81" (UID: "40d28340-4369-46ef-9765-0c362b6fdb81"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.058401 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-config" (OuterVolumeSpecName: "config") pod "40d28340-4369-46ef-9765-0c362b6fdb81" (UID: "40d28340-4369-46ef-9765-0c362b6fdb81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.101130 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.101156 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7v7w\" (UniqueName: \"kubernetes.io/projected/40d28340-4369-46ef-9765-0c362b6fdb81-kube-api-access-r7v7w\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.101166 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d28340-4369-46ef-9765-0c362b6fdb81-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.239878 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"12dcc0fa-368d-4a71-99ee-fe27e2cd410a","Type":"ContainerStarted","Data":"072d38d88882a436a4280b46e6a8d6f2c68614957fbf57df2c2de463757c1c8f"} Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.243131 4711 generic.go:334] "Generic (PLEG): container finished" podID="40d28340-4369-46ef-9765-0c362b6fdb81" containerID="34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395" exitCode=0 Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.243183 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" 
event={"ID":"40d28340-4369-46ef-9765-0c362b6fdb81","Type":"ContainerDied","Data":"34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395"} Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.243200 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" event={"ID":"40d28340-4369-46ef-9765-0c362b6fdb81","Type":"ContainerDied","Data":"f2a6cc558d6ba706de72ba40c7ab5b4435106a39fa27831de9d79f86ca1d1442"} Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.243238 4711 scope.go:117] "RemoveContainer" containerID="34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.243343 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-knkz7" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.248828 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdbcea35-5752-4be6-a7db-0f3aa362be58","Type":"ContainerStarted","Data":"8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7"} Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.252017 4711 generic.go:334] "Generic (PLEG): container finished" podID="82b00f57-beb4-43ad-a1c5-cc9790bb167e" containerID="4a06e598337b189eabe7c2bc0e7b17914dfe868c2ac84b6afed2844a167a78a5" exitCode=0 Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.252120 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lxtbd" event={"ID":"82b00f57-beb4-43ad-a1c5-cc9790bb167e","Type":"ContainerDied","Data":"4a06e598337b189eabe7c2bc0e7b17914dfe868c2ac84b6afed2844a167a78a5"} Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.256602 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29","Type":"ContainerStarted","Data":"59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4"} Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.257177 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.309230 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-knkz7"] Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.317312 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-knkz7"] Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.588797 4711 scope.go:117] "RemoveContainer" containerID="1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.982843 4711 scope.go:117] "RemoveContainer" containerID="34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395" Dec 02 10:31:18 crc kubenswrapper[4711]: E1202 10:31:18.983287 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395\": container with ID starting with 34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395 not found: ID does not exist" containerID="34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.983332 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395"} err="failed to get container status \"34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395\": rpc error: code = NotFound desc = could not find container \"34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395\": container with ID starting with 
34d5137fb4c0e1a940818ce6a08fb5a6be8cea2fff61340201410b1edc661395 not found: ID does not exist" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.983354 4711 scope.go:117] "RemoveContainer" containerID="1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d" Dec 02 10:31:18 crc kubenswrapper[4711]: E1202 10:31:18.983678 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d\": container with ID starting with 1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d not found: ID does not exist" containerID="1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d" Dec 02 10:31:18 crc kubenswrapper[4711]: I1202 10:31:18.983772 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d"} err="failed to get container status \"1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d\": rpc error: code = NotFound desc = could not find container \"1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d\": container with ID starting with 1b71fcffdab278cb45fc8a692b7028786e1d0591779ffb95e789597695cef64d not found: ID does not exist" Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.090713 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d28340-4369-46ef-9765-0c362b6fdb81" path="/var/lib/kubelet/pods/40d28340-4369-46ef-9765-0c362b6fdb81/volumes" Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.267157 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb" event={"ID":"7ce53b33-b78a-446d-b345-c8d918209ddf","Type":"ContainerStarted","Data":"eae7924579f3f43ae30e7c7e4e4bf6d258623d24784ffc571091856c4778acce"} Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.267497 4711 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-q57lb" Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.269083 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed","Type":"ContainerStarted","Data":"7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e"} Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.270018 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.271550 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7fff7494-ee8a-4c45-87de-00444f64be54","Type":"ContainerStarted","Data":"7fc0cd31124b71d0f3fa04f71b92c577556e775b8a816058fb874ad72ed748e1"} Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.274266 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"99032f62-533c-4fa2-887c-41a25a505906","Type":"ContainerStarted","Data":"ad4fabf786064b06d6711430e6565aac0c6926d1ff89b640fc644cad48cd7f28"} Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.296298 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q57lb" podStartSLOduration=8.327683355 podStartE2EDuration="17.296269372s" podCreationTimestamp="2025-12-02 10:31:02 +0000 UTC" firstStartedPulling="2025-12-02 10:31:07.461920299 +0000 UTC m=+1057.171286746" lastFinishedPulling="2025-12-02 10:31:16.430506316 +0000 UTC m=+1066.139872763" observedRunningTime="2025-12-02 10:31:19.28561275 +0000 UTC m=+1068.994979217" watchObservedRunningTime="2025-12-02 10:31:19.296269372 +0000 UTC m=+1069.005635819" Dec 02 10:31:19 crc kubenswrapper[4711]: I1202 10:31:19.313705 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.739823529 
podStartE2EDuration="22.313680491s" podCreationTimestamp="2025-12-02 10:30:57 +0000 UTC" firstStartedPulling="2025-12-02 10:31:07.46014083 +0000 UTC m=+1057.169507267" lastFinishedPulling="2025-12-02 10:31:19.033997782 +0000 UTC m=+1068.743364229" observedRunningTime="2025-12-02 10:31:19.300930961 +0000 UTC m=+1069.010297418" watchObservedRunningTime="2025-12-02 10:31:19.313680491 +0000 UTC m=+1069.023046948" Dec 02 10:31:20 crc kubenswrapper[4711]: I1202 10:31:20.285914 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lxtbd" event={"ID":"82b00f57-beb4-43ad-a1c5-cc9790bb167e","Type":"ContainerStarted","Data":"465516e1dfcf575a198d89a0e5a3618802f4f2a4504d5a7cbf9ba7e549ab527e"} Dec 02 10:31:20 crc kubenswrapper[4711]: I1202 10:31:20.286354 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lxtbd" event={"ID":"82b00f57-beb4-43ad-a1c5-cc9790bb167e","Type":"ContainerStarted","Data":"e1d944046ea6e9b8eaea38deb28828481b96fb4ce3b0b0b376d99e505f5a57ed"} Dec 02 10:31:20 crc kubenswrapper[4711]: I1202 10:31:20.286395 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:20 crc kubenswrapper[4711]: I1202 10:31:20.286415 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:20 crc kubenswrapper[4711]: I1202 10:31:20.288175 4711 generic.go:334] "Generic (PLEG): container finished" podID="1de720bf-9fe1-40cb-888c-1868fbc89f63" containerID="028bbb30b123df3b804102065db81d2141165f00d82fea6d79c16e881a91b553" exitCode=0 Dec 02 10:31:20 crc kubenswrapper[4711]: I1202 10:31:20.288299 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1de720bf-9fe1-40cb-888c-1868fbc89f63","Type":"ContainerDied","Data":"028bbb30b123df3b804102065db81d2141165f00d82fea6d79c16e881a91b553"} Dec 02 10:31:20 crc kubenswrapper[4711]: I1202 
10:31:20.316036 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lxtbd" podStartSLOduration=10.16761519 podStartE2EDuration="18.316015878s" podCreationTimestamp="2025-12-02 10:31:02 +0000 UTC" firstStartedPulling="2025-12-02 10:31:08.301776859 +0000 UTC m=+1058.011143306" lastFinishedPulling="2025-12-02 10:31:16.450177547 +0000 UTC m=+1066.159543994" observedRunningTime="2025-12-02 10:31:20.309200551 +0000 UTC m=+1070.018566998" watchObservedRunningTime="2025-12-02 10:31:20.316015878 +0000 UTC m=+1070.025382315" Dec 02 10:31:22 crc kubenswrapper[4711]: I1202 10:31:22.310267 4711 generic.go:334] "Generic (PLEG): container finished" podID="12dcc0fa-368d-4a71-99ee-fe27e2cd410a" containerID="072d38d88882a436a4280b46e6a8d6f2c68614957fbf57df2c2de463757c1c8f" exitCode=0 Dec 02 10:31:22 crc kubenswrapper[4711]: I1202 10:31:22.310375 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"12dcc0fa-368d-4a71-99ee-fe27e2cd410a","Type":"ContainerDied","Data":"072d38d88882a436a4280b46e6a8d6f2c68614957fbf57df2c2de463757c1c8f"} Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.321460 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1de720bf-9fe1-40cb-888c-1868fbc89f63","Type":"ContainerStarted","Data":"9ada31d248ceefbd9892076e1025559d8d1f1e2d2daf0af198a19af306ea5972"} Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.325559 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"12dcc0fa-368d-4a71-99ee-fe27e2cd410a","Type":"ContainerStarted","Data":"514e67c699e8b7b46e0bed2fe48a0cb37ae87786f071aeab5a538c7b7a4fe2ff"} Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.330079 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"99032f62-533c-4fa2-887c-41a25a505906","Type":"ContainerStarted","Data":"5f1c10743c36a34ab41060d73c7b8b6e11c15bd576de2e96765de31e7265d9a4"} Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.332179 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7fff7494-ee8a-4c45-87de-00444f64be54","Type":"ContainerStarted","Data":"be30d250fedd8b13ac4add2bf1d8b69cba71dd6ade393f13af2bf3ef3717c07c"} Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.350215 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.443088028 podStartE2EDuration="29.350197875s" podCreationTimestamp="2025-12-02 10:30:54 +0000 UTC" firstStartedPulling="2025-12-02 10:31:05.303290723 +0000 UTC m=+1055.012657170" lastFinishedPulling="2025-12-02 10:31:14.21040053 +0000 UTC m=+1063.919767017" observedRunningTime="2025-12-02 10:31:23.348955451 +0000 UTC m=+1073.058321908" watchObservedRunningTime="2025-12-02 10:31:23.350197875 +0000 UTC m=+1073.059564322" Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.363360 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.375037 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.078393286 podStartE2EDuration="19.375014706s" podCreationTimestamp="2025-12-02 10:31:04 +0000 UTC" firstStartedPulling="2025-12-02 10:31:07.607687326 +0000 UTC m=+1057.317053773" lastFinishedPulling="2025-12-02 10:31:22.904308746 +0000 UTC m=+1072.613675193" observedRunningTime="2025-12-02 10:31:23.367815959 +0000 UTC m=+1073.077182416" watchObservedRunningTime="2025-12-02 10:31:23.375014706 +0000 UTC m=+1073.084381163" Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.398927 4711 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.12383779 podStartE2EDuration="23.398901864s" podCreationTimestamp="2025-12-02 10:31:00 +0000 UTC" firstStartedPulling="2025-12-02 10:31:08.698826445 +0000 UTC m=+1058.408192892" lastFinishedPulling="2025-12-02 10:31:22.973890519 +0000 UTC m=+1072.683256966" observedRunningTime="2025-12-02 10:31:23.394854933 +0000 UTC m=+1073.104221390" watchObservedRunningTime="2025-12-02 10:31:23.398901864 +0000 UTC m=+1073.108268321" Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.427651 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:23 crc kubenswrapper[4711]: I1202 10:31:23.432008 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.44423099 podStartE2EDuration="31.431969773s" podCreationTimestamp="2025-12-02 10:30:52 +0000 UTC" firstStartedPulling="2025-12-02 10:31:07.460815228 +0000 UTC m=+1057.170181675" lastFinishedPulling="2025-12-02 10:31:16.448554011 +0000 UTC m=+1066.157920458" observedRunningTime="2025-12-02 10:31:23.422025709 +0000 UTC m=+1073.131392196" watchObservedRunningTime="2025-12-02 10:31:23.431969773 +0000 UTC m=+1073.141336230" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.144922 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.145653 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.343959 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.444284 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 
10:31:24.741697 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2bpw"] Dec 02 10:31:24 crc kubenswrapper[4711]: E1202 10:31:24.742022 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d28340-4369-46ef-9765-0c362b6fdb81" containerName="init" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.742071 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d28340-4369-46ef-9765-0c362b6fdb81" containerName="init" Dec 02 10:31:24 crc kubenswrapper[4711]: E1202 10:31:24.742090 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d28340-4369-46ef-9765-0c362b6fdb81" containerName="dnsmasq-dns" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.742096 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d28340-4369-46ef-9765-0c362b6fdb81" containerName="dnsmasq-dns" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.742245 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d28340-4369-46ef-9765-0c362b6fdb81" containerName="dnsmasq-dns" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.743065 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.745085 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.758316 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2bpw"] Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.834142 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-7vpwc"] Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.835386 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.848302 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.868886 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7vpwc"] Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.918876 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4a2d3ff8-c766-478e-9fae-105cd7432c09-ovs-rundir\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.918958 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9hk\" (UniqueName: \"kubernetes.io/projected/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-kube-api-access-lm9hk\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.919059 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.919095 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4a2d3ff8-c766-478e-9fae-105cd7432c09-ovn-rundir\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " 
pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.919125 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-config\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.919153 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.919190 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2d3ff8-c766-478e-9fae-105cd7432c09-combined-ca-bundle\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.919220 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx4dq\" (UniqueName: \"kubernetes.io/projected/4a2d3ff8-c766-478e-9fae-105cd7432c09-kube-api-access-wx4dq\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.919297 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2d3ff8-c766-478e-9fae-105cd7432c09-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7vpwc\" (UID: 
\"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:24 crc kubenswrapper[4711]: I1202 10:31:24.919349 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2d3ff8-c766-478e-9fae-105cd7432c09-config\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020535 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4a2d3ff8-c766-478e-9fae-105cd7432c09-ovs-rundir\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020593 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9hk\" (UniqueName: \"kubernetes.io/projected/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-kube-api-access-lm9hk\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020619 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020643 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4a2d3ff8-c766-478e-9fae-105cd7432c09-ovn-rundir\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " 
pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020664 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-config\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020691 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020726 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2d3ff8-c766-478e-9fae-105cd7432c09-combined-ca-bundle\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020753 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx4dq\" (UniqueName: \"kubernetes.io/projected/4a2d3ff8-c766-478e-9fae-105cd7432c09-kube-api-access-wx4dq\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020824 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2d3ff8-c766-478e-9fae-105cd7432c09-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 
02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020898 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2d3ff8-c766-478e-9fae-105cd7432c09-config\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020909 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4a2d3ff8-c766-478e-9fae-105cd7432c09-ovs-rundir\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.020937 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4a2d3ff8-c766-478e-9fae-105cd7432c09-ovn-rundir\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.021573 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.021611 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-config\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.021661 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4a2d3ff8-c766-478e-9fae-105cd7432c09-config\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.022071 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.035496 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2d3ff8-c766-478e-9fae-105cd7432c09-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.037157 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2d3ff8-c766-478e-9fae-105cd7432c09-combined-ca-bundle\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.037387 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm9hk\" (UniqueName: \"kubernetes.io/projected/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-kube-api-access-lm9hk\") pod \"dnsmasq-dns-6bc7876d45-l2bpw\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.065020 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx4dq\" (UniqueName: 
\"kubernetes.io/projected/4a2d3ff8-c766-478e-9fae-105cd7432c09-kube-api-access-wx4dq\") pod \"ovn-controller-metrics-7vpwc\" (UID: \"4a2d3ff8-c766-478e-9fae-105cd7432c09\") " pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.068999 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.185336 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7vpwc" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.206228 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2bpw"] Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.245597 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-5bb7t"] Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.246893 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.253317 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.257032 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5bb7t"] Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.328065 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-config\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.328122 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.328177 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.328206 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-dns-svc\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" 
Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.328258 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5bw\" (UniqueName: \"kubernetes.io/projected/93f5a3d9-0a18-4721-b50f-71103bb72b43-kube-api-access-zz5bw\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.429383 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-dns-svc\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.429467 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz5bw\" (UniqueName: \"kubernetes.io/projected/93f5a3d9-0a18-4721-b50f-71103bb72b43-kube-api-access-zz5bw\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.429501 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-config\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.429522 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc 
kubenswrapper[4711]: I1202 10:31:25.429569 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.430341 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.430843 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-dns-svc\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.431612 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-config\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.432131 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.513866 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zz5bw\" (UniqueName: \"kubernetes.io/projected/93f5a3d9-0a18-4721-b50f-71103bb72b43-kube-api-access-zz5bw\") pod \"dnsmasq-dns-8554648995-5bb7t\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.579443 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.596036 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2bpw"] Dec 02 10:31:25 crc kubenswrapper[4711]: W1202 10:31:25.607286 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97df6d47_7ed0_4f99_b0e1_d2a9d4257820.slice/crio-ebc67634c96b9977e90969bf5d11dbbea6a72db2cd2d5abe8425acca14dcfc90 WatchSource:0}: Error finding container ebc67634c96b9977e90969bf5d11dbbea6a72db2cd2d5abe8425acca14dcfc90: Status 404 returned error can't find the container with id ebc67634c96b9977e90969bf5d11dbbea6a72db2cd2d5abe8425acca14dcfc90 Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.622458 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.622611 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.768066 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7vpwc"] Dec 02 10:31:25 crc kubenswrapper[4711]: W1202 10:31:25.772290 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a2d3ff8_c766_478e_9fae_105cd7432c09.slice/crio-ba4f49a717853109cc54d642834970b418a5ec6faad27460d3be866301d95993 WatchSource:0}: Error 
finding container ba4f49a717853109cc54d642834970b418a5ec6faad27460d3be866301d95993: Status 404 returned error can't find the container with id ba4f49a717853109cc54d642834970b418a5ec6faad27460d3be866301d95993 Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.874931 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 02 10:31:25 crc kubenswrapper[4711]: I1202 10:31:25.949262 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.000867 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.084540 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5bb7t"] Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.380913 4711 generic.go:334] "Generic (PLEG): container finished" podID="93f5a3d9-0a18-4721-b50f-71103bb72b43" containerID="ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879" exitCode=0 Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.380991 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5bb7t" event={"ID":"93f5a3d9-0a18-4721-b50f-71103bb72b43","Type":"ContainerDied","Data":"ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879"} Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.381017 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5bb7t" event={"ID":"93f5a3d9-0a18-4721-b50f-71103bb72b43","Type":"ContainerStarted","Data":"6f36793d770760b7a3c4d3120779547a4234a089ccb279445f3e848af199f960"} Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.391176 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7vpwc" 
event={"ID":"4a2d3ff8-c766-478e-9fae-105cd7432c09","Type":"ContainerStarted","Data":"79e8382ccae858d026226bbfadab9199fc7da2c0ba5623fb735f57b9af4e6176"} Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.391560 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7vpwc" event={"ID":"4a2d3ff8-c766-478e-9fae-105cd7432c09","Type":"ContainerStarted","Data":"ba4f49a717853109cc54d642834970b418a5ec6faad27460d3be866301d95993"} Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.398377 4711 generic.go:334] "Generic (PLEG): container finished" podID="97df6d47-7ed0-4f99-b0e1-d2a9d4257820" containerID="488d29ce8596679de2a73c2ff5d7620a2b088115d4407a66e818098c78b59312" exitCode=0 Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.399096 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" event={"ID":"97df6d47-7ed0-4f99-b0e1-d2a9d4257820","Type":"ContainerDied","Data":"488d29ce8596679de2a73c2ff5d7620a2b088115d4407a66e818098c78b59312"} Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.399156 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" event={"ID":"97df6d47-7ed0-4f99-b0e1-d2a9d4257820","Type":"ContainerStarted","Data":"ebc67634c96b9977e90969bf5d11dbbea6a72db2cd2d5abe8425acca14dcfc90"} Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.399779 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.479128 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7vpwc" podStartSLOduration=2.479106556 podStartE2EDuration="2.479106556s" podCreationTimestamp="2025-12-02 10:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:31:26.443169218 +0000 UTC m=+1076.152535685" 
watchObservedRunningTime="2025-12-02 10:31:26.479106556 +0000 UTC m=+1076.188473013" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.479670 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.656119 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.663430 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.668198 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.668433 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.668568 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.668687 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-hq57h" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.675129 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.760017 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.760703 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hq55\" (UniqueName: \"kubernetes.io/projected/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-kube-api-access-8hq55\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.760739 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.760775 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.760830 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-config\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.760853 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-scripts\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: 
I1202 10:31:26.760867 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.761146 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.862886 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm9hk\" (UniqueName: \"kubernetes.io/projected/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-kube-api-access-lm9hk\") pod \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.863257 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-ovsdbserver-sb\") pod \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.863348 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-dns-svc\") pod \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.863412 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-config\") pod \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\" (UID: \"97df6d47-7ed0-4f99-b0e1-d2a9d4257820\") " Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.863712 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hq55\" (UniqueName: \"kubernetes.io/projected/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-kube-api-access-8hq55\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.863766 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.863862 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.863920 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-config\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.864146 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-scripts\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: 
I1202 10:31:26.864523 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.864581 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.864874 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-config\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.864916 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-scripts\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.865265 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.867175 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-kube-api-access-lm9hk" (OuterVolumeSpecName: "kube-api-access-lm9hk") pod "97df6d47-7ed0-4f99-b0e1-d2a9d4257820" 
(UID: "97df6d47-7ed0-4f99-b0e1-d2a9d4257820"). InnerVolumeSpecName "kube-api-access-lm9hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.867328 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.867842 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.869869 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.882831 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "97df6d47-7ed0-4f99-b0e1-d2a9d4257820" (UID: "97df6d47-7ed0-4f99-b0e1-d2a9d4257820"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.883463 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hq55\" (UniqueName: \"kubernetes.io/projected/19183ef0-1a98-4d60-96c3-2b15fd8bd2e8-kube-api-access-8hq55\") pod \"ovn-northd-0\" (UID: \"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8\") " pod="openstack/ovn-northd-0" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.883665 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-config" (OuterVolumeSpecName: "config") pod "97df6d47-7ed0-4f99-b0e1-d2a9d4257820" (UID: "97df6d47-7ed0-4f99-b0e1-d2a9d4257820"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.889915 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97df6d47-7ed0-4f99-b0e1-d2a9d4257820" (UID: "97df6d47-7ed0-4f99-b0e1-d2a9d4257820"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.965573 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm9hk\" (UniqueName: \"kubernetes.io/projected/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-kube-api-access-lm9hk\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.965607 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.965616 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.965626 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97df6d47-7ed0-4f99-b0e1-d2a9d4257820-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:26 crc kubenswrapper[4711]: I1202 10:31:26.999588 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.407965 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5bb7t" event={"ID":"93f5a3d9-0a18-4721-b50f-71103bb72b43","Type":"ContainerStarted","Data":"9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e"} Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.408628 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.410759 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.411213 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l2bpw" event={"ID":"97df6d47-7ed0-4f99-b0e1-d2a9d4257820","Type":"ContainerDied","Data":"ebc67634c96b9977e90969bf5d11dbbea6a72db2cd2d5abe8425acca14dcfc90"} Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.411240 4711 scope.go:117] "RemoveContainer" containerID="488d29ce8596679de2a73c2ff5d7620a2b088115d4407a66e818098c78b59312" Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.434372 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.437450 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-5bb7t" podStartSLOduration=2.437425543 podStartE2EDuration="2.437425543s" podCreationTimestamp="2025-12-02 10:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:31:27.426961094 +0000 UTC m=+1077.136327541" watchObservedRunningTime="2025-12-02 10:31:27.437425543 +0000 UTC m=+1077.146791990" Dec 02 10:31:27 crc kubenswrapper[4711]: W1202 10:31:27.446364 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19183ef0_1a98_4d60_96c3_2b15fd8bd2e8.slice/crio-e2bbc1530a5b865e9eba53f120baffe642475546c9ca52097a186c8b9d3ceeec WatchSource:0}: Error finding container e2bbc1530a5b865e9eba53f120baffe642475546c9ca52097a186c8b9d3ceeec: Status 404 returned error can't find the container with id e2bbc1530a5b865e9eba53f120baffe642475546c9ca52097a186c8b9d3ceeec Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.467481 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2bpw"] Dec 02 10:31:27 
crc kubenswrapper[4711]: I1202 10:31:27.489303 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2bpw"] Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.945479 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 02 10:31:27 crc kubenswrapper[4711]: I1202 10:31:27.984716 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5bb7t"] Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.012732 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kl96n"] Dec 02 10:31:28 crc kubenswrapper[4711]: E1202 10:31:28.013099 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97df6d47-7ed0-4f99-b0e1-d2a9d4257820" containerName="init" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.013116 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="97df6d47-7ed0-4f99-b0e1-d2a9d4257820" containerName="init" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.013306 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="97df6d47-7ed0-4f99-b0e1-d2a9d4257820" containerName="init" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.014167 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.036077 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kl96n"] Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.146986 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.217282 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.217343 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.217847 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pqhk\" (UniqueName: \"kubernetes.io/projected/21313175-93c7-4c32-b581-c77b63cea062-kube-api-access-2pqhk\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.217912 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-config\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.217999 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.221726 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.326034 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.326129 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.326202 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.329015 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.329081 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.329466 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pqhk\" (UniqueName: \"kubernetes.io/projected/21313175-93c7-4c32-b581-c77b63cea062-kube-api-access-2pqhk\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.329502 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-config\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.329673 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.330073 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-config\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" 
(UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.357875 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pqhk\" (UniqueName: \"kubernetes.io/projected/21313175-93c7-4c32-b581-c77b63cea062-kube-api-access-2pqhk\") pod \"dnsmasq-dns-b8fbc5445-kl96n\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.418692 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8","Type":"ContainerStarted","Data":"e2bbc1530a5b865e9eba53f120baffe642475546c9ca52097a186c8b9d3ceeec"} Dec 02 10:31:28 crc kubenswrapper[4711]: I1202 10:31:28.633366 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.092274 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97df6d47-7ed0-4f99-b0e1-d2a9d4257820" path="/var/lib/kubelet/pods/97df6d47-7ed0-4f99-b0e1-d2a9d4257820/volumes" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.127422 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.137394 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.137444 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.144222 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5lszm" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.144371 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.144540 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.144568 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.192083 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kl96n"] Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.246874 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.246935 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/23030cd9-0bb2-4574-8c49-405bef4719b5-cache\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.246960 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/23030cd9-0bb2-4574-8c49-405bef4719b5-lock\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc 
kubenswrapper[4711]: I1202 10:31:29.247027 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjgr6\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-kube-api-access-tjgr6\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.247066 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.348435 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.348900 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.348940 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/23030cd9-0bb2-4574-8c49-405bef4719b5-cache\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.348987 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/23030cd9-0bb2-4574-8c49-405bef4719b5-lock\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.349055 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjgr6\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-kube-api-access-tjgr6\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: E1202 10:31:29.348651 4711 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 10:31:29 crc kubenswrapper[4711]: E1202 10:31:29.349193 4711 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 10:31:29 crc kubenswrapper[4711]: E1202 10:31:29.349321 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift podName:23030cd9-0bb2-4574-8c49-405bef4719b5 nodeName:}" failed. No retries permitted until 2025-12-02 10:31:29.849245653 +0000 UTC m=+1079.558612100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift") pod "swift-storage-0" (UID: "23030cd9-0bb2-4574-8c49-405bef4719b5") : configmap "swift-ring-files" not found Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.349733 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.350309 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/23030cd9-0bb2-4574-8c49-405bef4719b5-lock\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.350310 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/23030cd9-0bb2-4574-8c49-405bef4719b5-cache\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.368179 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjgr6\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-kube-api-access-tjgr6\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.380559 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " 
pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.429444 4711 generic.go:334] "Generic (PLEG): container finished" podID="21313175-93c7-4c32-b581-c77b63cea062" containerID="5a1ec9d459dcfd3975177c7b1cb543458adc5ba2b1ca30f1f0413575ccd99ed0" exitCode=0 Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.429518 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" event={"ID":"21313175-93c7-4c32-b581-c77b63cea062","Type":"ContainerDied","Data":"5a1ec9d459dcfd3975177c7b1cb543458adc5ba2b1ca30f1f0413575ccd99ed0"} Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.429553 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" event={"ID":"21313175-93c7-4c32-b581-c77b63cea062","Type":"ContainerStarted","Data":"6791d63f670dfe2c8c794ec9853f3a2ad1c0880b084cf939e3fdb2a0c2d98763"} Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.439533 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8","Type":"ContainerStarted","Data":"71236c30099ee6c7b02f4e5bd4a8cf8bf29c17dea7c980e86718d0bcbca95099"} Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.439623 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"19183ef0-1a98-4d60-96c3-2b15fd8bd2e8","Type":"ContainerStarted","Data":"4260b10093ad3f368b3dd7776eac0798b7fa5823e5ef5d20551257ccc779f8ca"} Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.439734 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.439579 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-5bb7t" podUID="93f5a3d9-0a18-4721-b50f-71103bb72b43" containerName="dnsmasq-dns" 
containerID="cri-o://9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e" gracePeriod=10 Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.547025 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.233776126 podStartE2EDuration="3.54700592s" podCreationTimestamp="2025-12-02 10:31:26 +0000 UTC" firstStartedPulling="2025-12-02 10:31:27.447570101 +0000 UTC m=+1077.156936548" lastFinishedPulling="2025-12-02 10:31:28.760799895 +0000 UTC m=+1078.470166342" observedRunningTime="2025-12-02 10:31:29.522771894 +0000 UTC m=+1079.232138341" watchObservedRunningTime="2025-12-02 10:31:29.54700592 +0000 UTC m=+1079.256372357" Dec 02 10:31:29 crc kubenswrapper[4711]: I1202 10:31:29.862158 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:29 crc kubenswrapper[4711]: E1202 10:31:29.862330 4711 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 10:31:29 crc kubenswrapper[4711]: E1202 10:31:29.862343 4711 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 10:31:29 crc kubenswrapper[4711]: E1202 10:31:29.862388 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift podName:23030cd9-0bb2-4574-8c49-405bef4719b5 nodeName:}" failed. No retries permitted until 2025-12-02 10:31:30.86237343 +0000 UTC m=+1080.571739877 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift") pod "swift-storage-0" (UID: "23030cd9-0bb2-4574-8c49-405bef4719b5") : configmap "swift-ring-files" not found Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.277687 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.372517 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.440074 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.448370 4711 generic.go:334] "Generic (PLEG): container finished" podID="93f5a3d9-0a18-4721-b50f-71103bb72b43" containerID="9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e" exitCode=0 Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.448423 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5bb7t" event={"ID":"93f5a3d9-0a18-4721-b50f-71103bb72b43","Type":"ContainerDied","Data":"9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e"} Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.448448 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5bb7t" event={"ID":"93f5a3d9-0a18-4721-b50f-71103bb72b43","Type":"ContainerDied","Data":"6f36793d770760b7a3c4d3120779547a4234a089ccb279445f3e848af199f960"} Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.448461 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-5bb7t" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.448467 4711 scope.go:117] "RemoveContainer" containerID="9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.450346 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" event={"ID":"21313175-93c7-4c32-b581-c77b63cea062","Type":"ContainerStarted","Data":"e935f5145a6791ee54709d5061ea46b4b21b18f7d91a3c05d9f33ff26942c4ee"} Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.450434 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.473948 4711 scope.go:117] "RemoveContainer" containerID="ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.474388 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-sb\") pod \"93f5a3d9-0a18-4721-b50f-71103bb72b43\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.474516 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-dns-svc\") pod \"93f5a3d9-0a18-4721-b50f-71103bb72b43\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.474544 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-nb\") pod \"93f5a3d9-0a18-4721-b50f-71103bb72b43\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " Dec 02 10:31:30 crc 
kubenswrapper[4711]: I1202 10:31:30.474575 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-config\") pod \"93f5a3d9-0a18-4721-b50f-71103bb72b43\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.474603 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz5bw\" (UniqueName: \"kubernetes.io/projected/93f5a3d9-0a18-4721-b50f-71103bb72b43-kube-api-access-zz5bw\") pod \"93f5a3d9-0a18-4721-b50f-71103bb72b43\" (UID: \"93f5a3d9-0a18-4721-b50f-71103bb72b43\") " Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.482562 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" podStartSLOduration=3.482546741 podStartE2EDuration="3.482546741s" podCreationTimestamp="2025-12-02 10:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:31:30.481392119 +0000 UTC m=+1080.190758556" watchObservedRunningTime="2025-12-02 10:31:30.482546741 +0000 UTC m=+1080.191913188" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.502023 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f5a3d9-0a18-4721-b50f-71103bb72b43-kube-api-access-zz5bw" (OuterVolumeSpecName: "kube-api-access-zz5bw") pod "93f5a3d9-0a18-4721-b50f-71103bb72b43" (UID: "93f5a3d9-0a18-4721-b50f-71103bb72b43"). InnerVolumeSpecName "kube-api-access-zz5bw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.517633 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-config" (OuterVolumeSpecName: "config") pod "93f5a3d9-0a18-4721-b50f-71103bb72b43" (UID: "93f5a3d9-0a18-4721-b50f-71103bb72b43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.519526 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93f5a3d9-0a18-4721-b50f-71103bb72b43" (UID: "93f5a3d9-0a18-4721-b50f-71103bb72b43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.523462 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93f5a3d9-0a18-4721-b50f-71103bb72b43" (UID: "93f5a3d9-0a18-4721-b50f-71103bb72b43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.537378 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93f5a3d9-0a18-4721-b50f-71103bb72b43" (UID: "93f5a3d9-0a18-4721-b50f-71103bb72b43"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.577096 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.577122 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.577131 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.577141 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f5a3d9-0a18-4721-b50f-71103bb72b43-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.577150 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz5bw\" (UniqueName: \"kubernetes.io/projected/93f5a3d9-0a18-4721-b50f-71103bb72b43-kube-api-access-zz5bw\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.580061 4711 scope.go:117] "RemoveContainer" containerID="9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e" Dec 02 10:31:30 crc kubenswrapper[4711]: E1202 10:31:30.580453 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e\": container with ID starting with 9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e not found: ID does not exist" 
containerID="9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.580501 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e"} err="failed to get container status \"9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e\": rpc error: code = NotFound desc = could not find container \"9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e\": container with ID starting with 9217e14e28e9902eb1f1e8338c837779d85f2524e6a367e948dd889a1040064e not found: ID does not exist" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.580530 4711 scope.go:117] "RemoveContainer" containerID="ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879" Dec 02 10:31:30 crc kubenswrapper[4711]: E1202 10:31:30.580806 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879\": container with ID starting with ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879 not found: ID does not exist" containerID="ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.580844 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879"} err="failed to get container status \"ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879\": rpc error: code = NotFound desc = could not find container \"ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879\": container with ID starting with ab5bfaebe7e573df4b34d24ad23d96494039f666e95e536722e9576f5330e879 not found: ID does not exist" Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.780549 4711 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5bb7t"] Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.788511 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5bb7t"] Dec 02 10:31:30 crc kubenswrapper[4711]: I1202 10:31:30.882181 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:30 crc kubenswrapper[4711]: E1202 10:31:30.882436 4711 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 10:31:30 crc kubenswrapper[4711]: E1202 10:31:30.882458 4711 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 10:31:30 crc kubenswrapper[4711]: E1202 10:31:30.882513 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift podName:23030cd9-0bb2-4574-8c49-405bef4719b5 nodeName:}" failed. No retries permitted until 2025-12-02 10:31:32.882492465 +0000 UTC m=+1082.591858922 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift") pod "swift-storage-0" (UID: "23030cd9-0bb2-4574-8c49-405bef4719b5") : configmap "swift-ring-files" not found Dec 02 10:31:31 crc kubenswrapper[4711]: I1202 10:31:31.091903 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f5a3d9-0a18-4721-b50f-71103bb72b43" path="/var/lib/kubelet/pods/93f5a3d9-0a18-4721-b50f-71103bb72b43/volumes" Dec 02 10:31:32 crc kubenswrapper[4711]: I1202 10:31:32.920159 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:32 crc kubenswrapper[4711]: E1202 10:31:32.920572 4711 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 10:31:32 crc kubenswrapper[4711]: E1202 10:31:32.920716 4711 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 10:31:32 crc kubenswrapper[4711]: E1202 10:31:32.920772 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift podName:23030cd9-0bb2-4574-8c49-405bef4719b5 nodeName:}" failed. No retries permitted until 2025-12-02 10:31:36.920756062 +0000 UTC m=+1086.630122499 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift") pod "swift-storage-0" (UID: "23030cd9-0bb2-4574-8c49-405bef4719b5") : configmap "swift-ring-files" not found Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.113017 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-fmv9w"] Dec 02 10:31:33 crc kubenswrapper[4711]: E1202 10:31:33.113510 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f5a3d9-0a18-4721-b50f-71103bb72b43" containerName="init" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.113574 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f5a3d9-0a18-4721-b50f-71103bb72b43" containerName="init" Dec 02 10:31:33 crc kubenswrapper[4711]: E1202 10:31:33.113636 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f5a3d9-0a18-4721-b50f-71103bb72b43" containerName="dnsmasq-dns" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.113687 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f5a3d9-0a18-4721-b50f-71103bb72b43" containerName="dnsmasq-dns" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.113913 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f5a3d9-0a18-4721-b50f-71103bb72b43" containerName="dnsmasq-dns" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.114497 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.116564 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.117297 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.125000 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.152755 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-fmv9w"] Dec 02 10:31:33 crc kubenswrapper[4711]: E1202 10:31:33.153519 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dlb4h ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-fmv9w" podUID="d620a68a-0628-44b3-95e3-93c33ef5d9e0" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.161663 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-58jtp"] Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.162794 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.176344 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-58jtp"] Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.182328 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-fmv9w"] Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.225823 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620a68a-0628-44b3-95e3-93c33ef5d9e0-etc-swift\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.225896 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-swiftconf\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.226019 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-scripts\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.226044 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-dispersionconf\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc 
kubenswrapper[4711]: I1202 10:31:33.226067 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-combined-ca-bundle\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.226086 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-ring-data-devices\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.226103 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlb4h\" (UniqueName: \"kubernetes.io/projected/d620a68a-0628-44b3-95e3-93c33ef5d9e0-kube-api-access-dlb4h\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.327549 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620a68a-0628-44b3-95e3-93c33ef5d9e0-etc-swift\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.327596 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/848cb525-39ab-47d7-99fc-9fbc249e740a-etc-swift\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 
10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.327634 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-swiftconf\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.327656 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-scripts\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.327896 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-swiftconf\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328038 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-scripts\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328248 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-dispersionconf\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328317 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-combined-ca-bundle\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328341 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-ring-data-devices\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328258 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620a68a-0628-44b3-95e3-93c33ef5d9e0-etc-swift\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328369 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlb4h\" (UniqueName: \"kubernetes.io/projected/d620a68a-0628-44b3-95e3-93c33ef5d9e0-kube-api-access-dlb4h\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328442 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-ring-data-devices\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328497 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb9mx\" (UniqueName: \"kubernetes.io/projected/848cb525-39ab-47d7-99fc-9fbc249e740a-kube-api-access-bb9mx\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328542 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-dispersionconf\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328565 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-combined-ca-bundle\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.328997 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-ring-data-devices\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.329232 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-scripts\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.333313 4711 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-swiftconf\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.334169 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-dispersionconf\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.342988 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-combined-ca-bundle\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.358034 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlb4h\" (UniqueName: \"kubernetes.io/projected/d620a68a-0628-44b3-95e3-93c33ef5d9e0-kube-api-access-dlb4h\") pod \"swift-ring-rebalance-fmv9w\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.430154 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb9mx\" (UniqueName: \"kubernetes.io/projected/848cb525-39ab-47d7-99fc-9fbc249e740a-kube-api-access-bb9mx\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.430433 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-dispersionconf\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.430540 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-combined-ca-bundle\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.430912 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/848cb525-39ab-47d7-99fc-9fbc249e740a-etc-swift\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.431051 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-scripts\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.431203 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-swiftconf\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.431539 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/848cb525-39ab-47d7-99fc-9fbc249e740a-etc-swift\") pod \"swift-ring-rebalance-58jtp\" (UID: 
\"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.431746 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-scripts\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.431907 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-ring-data-devices\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.432349 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-ring-data-devices\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.435550 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-dispersionconf\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.435605 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-combined-ca-bundle\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc 
kubenswrapper[4711]: I1202 10:31:33.437025 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-swiftconf\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.456846 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb9mx\" (UniqueName: \"kubernetes.io/projected/848cb525-39ab-47d7-99fc-9fbc249e740a-kube-api-access-bb9mx\") pod \"swift-ring-rebalance-58jtp\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.483430 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.483552 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5lszm" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.493718 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.561029 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.736740 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620a68a-0628-44b3-95e3-93c33ef5d9e0-etc-swift\") pod \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.736855 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-swiftconf\") pod \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.736881 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-ring-data-devices\") pod \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.736905 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-combined-ca-bundle\") pod \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.736978 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-scripts\") pod \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.737108 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlb4h\" (UniqueName: 
\"kubernetes.io/projected/d620a68a-0628-44b3-95e3-93c33ef5d9e0-kube-api-access-dlb4h\") pod \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.737155 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d620a68a-0628-44b3-95e3-93c33ef5d9e0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d620a68a-0628-44b3-95e3-93c33ef5d9e0" (UID: "d620a68a-0628-44b3-95e3-93c33ef5d9e0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.737174 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-dispersionconf\") pod \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\" (UID: \"d620a68a-0628-44b3-95e3-93c33ef5d9e0\") " Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.737577 4711 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620a68a-0628-44b3-95e3-93c33ef5d9e0-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.741209 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d620a68a-0628-44b3-95e3-93c33ef5d9e0" (UID: "d620a68a-0628-44b3-95e3-93c33ef5d9e0"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.741675 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d620a68a-0628-44b3-95e3-93c33ef5d9e0" (UID: "d620a68a-0628-44b3-95e3-93c33ef5d9e0"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.741695 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-scripts" (OuterVolumeSpecName: "scripts") pod "d620a68a-0628-44b3-95e3-93c33ef5d9e0" (UID: "d620a68a-0628-44b3-95e3-93c33ef5d9e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.742220 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d620a68a-0628-44b3-95e3-93c33ef5d9e0" (UID: "d620a68a-0628-44b3-95e3-93c33ef5d9e0"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.742391 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d620a68a-0628-44b3-95e3-93c33ef5d9e0" (UID: "d620a68a-0628-44b3-95e3-93c33ef5d9e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.744782 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d620a68a-0628-44b3-95e3-93c33ef5d9e0-kube-api-access-dlb4h" (OuterVolumeSpecName: "kube-api-access-dlb4h") pod "d620a68a-0628-44b3-95e3-93c33ef5d9e0" (UID: "d620a68a-0628-44b3-95e3-93c33ef5d9e0"). InnerVolumeSpecName "kube-api-access-dlb4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.840102 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlb4h\" (UniqueName: \"kubernetes.io/projected/d620a68a-0628-44b3-95e3-93c33ef5d9e0-kube-api-access-dlb4h\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.840385 4711 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.840398 4711 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.840409 4711 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.840420 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620a68a-0628-44b3-95e3-93c33ef5d9e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.840431 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/d620a68a-0628-44b3-95e3-93c33ef5d9e0-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:33 crc kubenswrapper[4711]: I1202 10:31:33.967085 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-58jtp"] Dec 02 10:31:33 crc kubenswrapper[4711]: W1202 10:31:33.974775 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848cb525_39ab_47d7_99fc_9fbc249e740a.slice/crio-438282a1d2d92fb312fdd2297ecd5cc238c25d0bcf72200bb34474bc6e815be1 WatchSource:0}: Error finding container 438282a1d2d92fb312fdd2297ecd5cc238c25d0bcf72200bb34474bc6e815be1: Status 404 returned error can't find the container with id 438282a1d2d92fb312fdd2297ecd5cc238c25d0bcf72200bb34474bc6e815be1 Dec 02 10:31:34 crc kubenswrapper[4711]: I1202 10:31:34.492393 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fmv9w" Dec 02 10:31:34 crc kubenswrapper[4711]: I1202 10:31:34.492392 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-58jtp" event={"ID":"848cb525-39ab-47d7-99fc-9fbc249e740a","Type":"ContainerStarted","Data":"438282a1d2d92fb312fdd2297ecd5cc238c25d0bcf72200bb34474bc6e815be1"} Dec 02 10:31:34 crc kubenswrapper[4711]: I1202 10:31:34.542873 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-fmv9w"] Dec 02 10:31:34 crc kubenswrapper[4711]: I1202 10:31:34.550566 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-fmv9w"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.095449 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d620a68a-0628-44b3-95e3-93c33ef5d9e0" path="/var/lib/kubelet/pods/d620a68a-0628-44b3-95e3-93c33ef5d9e0/volumes" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.536773 4711 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/keystone-f853-account-create-update-zk2bx"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.537901 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.540048 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.557694 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-tsgjc"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.558967 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.577701 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f853-account-create-update-zk2bx"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.586262 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tsgjc"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.669009 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31e044e2-2341-4cf1-8669-9beba7eec45c-operator-scripts\") pod \"keystone-f853-account-create-update-zk2bx\" (UID: \"31e044e2-2341-4cf1-8669-9beba7eec45c\") " pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.669295 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlcs\" (UniqueName: \"kubernetes.io/projected/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-kube-api-access-swlcs\") pod \"keystone-db-create-tsgjc\" (UID: \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\") " pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:35 crc 
kubenswrapper[4711]: I1202 10:31:35.669423 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-operator-scripts\") pod \"keystone-db-create-tsgjc\" (UID: \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\") " pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.669445 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4lv9\" (UniqueName: \"kubernetes.io/projected/31e044e2-2341-4cf1-8669-9beba7eec45c-kube-api-access-v4lv9\") pod \"keystone-f853-account-create-update-zk2bx\" (UID: \"31e044e2-2341-4cf1-8669-9beba7eec45c\") " pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.758094 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vjxm2"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.759611 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.772135 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swlcs\" (UniqueName: \"kubernetes.io/projected/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-kube-api-access-swlcs\") pod \"keystone-db-create-tsgjc\" (UID: \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\") " pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.772783 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-operator-scripts\") pod \"keystone-db-create-tsgjc\" (UID: \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\") " pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.772846 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4lv9\" (UniqueName: \"kubernetes.io/projected/31e044e2-2341-4cf1-8669-9beba7eec45c-kube-api-access-v4lv9\") pod \"keystone-f853-account-create-update-zk2bx\" (UID: \"31e044e2-2341-4cf1-8669-9beba7eec45c\") " pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.773117 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31e044e2-2341-4cf1-8669-9beba7eec45c-operator-scripts\") pod \"keystone-f853-account-create-update-zk2bx\" (UID: \"31e044e2-2341-4cf1-8669-9beba7eec45c\") " pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.774234 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-operator-scripts\") pod \"keystone-db-create-tsgjc\" (UID: 
\"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\") " pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.774280 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31e044e2-2341-4cf1-8669-9beba7eec45c-operator-scripts\") pod \"keystone-f853-account-create-update-zk2bx\" (UID: \"31e044e2-2341-4cf1-8669-9beba7eec45c\") " pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.776919 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vjxm2"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.800684 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4lv9\" (UniqueName: \"kubernetes.io/projected/31e044e2-2341-4cf1-8669-9beba7eec45c-kube-api-access-v4lv9\") pod \"keystone-f853-account-create-update-zk2bx\" (UID: \"31e044e2-2341-4cf1-8669-9beba7eec45c\") " pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.801250 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swlcs\" (UniqueName: \"kubernetes.io/projected/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-kube-api-access-swlcs\") pod \"keystone-db-create-tsgjc\" (UID: \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\") " pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.863655 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.867310 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d47a-account-create-update-r7rzm"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.868674 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.871105 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.873400 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.874613 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-operator-scripts\") pod \"placement-db-create-vjxm2\" (UID: \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\") " pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.874659 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f9tb\" (UniqueName: \"kubernetes.io/projected/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-kube-api-access-8f9tb\") pod \"placement-db-create-vjxm2\" (UID: \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\") " pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.877463 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d47a-account-create-update-r7rzm"] Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.976530 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-operator-scripts\") pod \"placement-d47a-account-create-update-r7rzm\" (UID: \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\") " pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.976625 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnx5\" (UniqueName: \"kubernetes.io/projected/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-kube-api-access-ntnx5\") pod \"placement-d47a-account-create-update-r7rzm\" (UID: \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\") " pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.976721 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-operator-scripts\") pod \"placement-db-create-vjxm2\" (UID: \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\") " pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.976758 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f9tb\" (UniqueName: \"kubernetes.io/projected/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-kube-api-access-8f9tb\") pod \"placement-db-create-vjxm2\" (UID: \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\") " pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:35 crc kubenswrapper[4711]: I1202 10:31:35.978075 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-operator-scripts\") pod \"placement-db-create-vjxm2\" (UID: \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\") " pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.018153 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f9tb\" (UniqueName: \"kubernetes.io/projected/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-kube-api-access-8f9tb\") pod \"placement-db-create-vjxm2\" (UID: \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\") " pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.063690 4711 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fv2p5"] Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.067390 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.078750 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-operator-scripts\") pod \"placement-d47a-account-create-update-r7rzm\" (UID: \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\") " pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.078846 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntnx5\" (UniqueName: \"kubernetes.io/projected/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-kube-api-access-ntnx5\") pod \"placement-d47a-account-create-update-r7rzm\" (UID: \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\") " pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.079385 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fv2p5"] Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.079566 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-operator-scripts\") pod \"placement-d47a-account-create-update-r7rzm\" (UID: \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\") " pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.084096 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.099549 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntnx5\" (UniqueName: \"kubernetes.io/projected/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-kube-api-access-ntnx5\") pod \"placement-d47a-account-create-update-r7rzm\" (UID: \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\") " pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.171450 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9162-account-create-update-vnj4b"] Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.172496 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.174582 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.177679 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9162-account-create-update-vnj4b"] Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.179869 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4vzc\" (UniqueName: \"kubernetes.io/projected/b9486b59-2bf0-492f-84e1-0a832e7b366c-kube-api-access-h4vzc\") pod \"glance-db-create-fv2p5\" (UID: \"b9486b59-2bf0-492f-84e1-0a832e7b366c\") " pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.180056 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9486b59-2bf0-492f-84e1-0a832e7b366c-operator-scripts\") pod \"glance-db-create-fv2p5\" (UID: \"b9486b59-2bf0-492f-84e1-0a832e7b366c\") " 
pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.272185 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.281975 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vzc\" (UniqueName: \"kubernetes.io/projected/b9486b59-2bf0-492f-84e1-0a832e7b366c-kube-api-access-h4vzc\") pod \"glance-db-create-fv2p5\" (UID: \"b9486b59-2bf0-492f-84e1-0a832e7b366c\") " pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.282061 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8588\" (UniqueName: \"kubernetes.io/projected/006b5b7a-0ef9-442a-9e52-462f5ef784ee-kube-api-access-g8588\") pod \"glance-9162-account-create-update-vnj4b\" (UID: \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\") " pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.282097 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9486b59-2bf0-492f-84e1-0a832e7b366c-operator-scripts\") pod \"glance-db-create-fv2p5\" (UID: \"b9486b59-2bf0-492f-84e1-0a832e7b366c\") " pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.282123 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006b5b7a-0ef9-442a-9e52-462f5ef784ee-operator-scripts\") pod \"glance-9162-account-create-update-vnj4b\" (UID: \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\") " pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.282742 4711 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9486b59-2bf0-492f-84e1-0a832e7b366c-operator-scripts\") pod \"glance-db-create-fv2p5\" (UID: \"b9486b59-2bf0-492f-84e1-0a832e7b366c\") " pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.298773 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vzc\" (UniqueName: \"kubernetes.io/projected/b9486b59-2bf0-492f-84e1-0a832e7b366c-kube-api-access-h4vzc\") pod \"glance-db-create-fv2p5\" (UID: \"b9486b59-2bf0-492f-84e1-0a832e7b366c\") " pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.373152 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tsgjc"] Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.379022 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f853-account-create-update-zk2bx"] Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.384338 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8588\" (UniqueName: \"kubernetes.io/projected/006b5b7a-0ef9-442a-9e52-462f5ef784ee-kube-api-access-g8588\") pod \"glance-9162-account-create-update-vnj4b\" (UID: \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\") " pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.384459 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006b5b7a-0ef9-442a-9e52-462f5ef784ee-operator-scripts\") pod \"glance-9162-account-create-update-vnj4b\" (UID: \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\") " pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.385314 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/006b5b7a-0ef9-442a-9e52-462f5ef784ee-operator-scripts\") pod \"glance-9162-account-create-update-vnj4b\" (UID: \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\") " pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.418201 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.420322 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8588\" (UniqueName: \"kubernetes.io/projected/006b5b7a-0ef9-442a-9e52-462f5ef784ee-kube-api-access-g8588\") pod \"glance-9162-account-create-update-vnj4b\" (UID: \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\") " pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.493134 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:36 crc kubenswrapper[4711]: W1202 10:31:36.811921 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78e039b8_78fb_43a0_9ab0_7d3a6dc43198.slice/crio-091fe46167695c3688446bd74588ba6caef910c44556aea30d36571f05799b15 WatchSource:0}: Error finding container 091fe46167695c3688446bd74588ba6caef910c44556aea30d36571f05799b15: Status 404 returned error can't find the container with id 091fe46167695c3688446bd74588ba6caef910c44556aea30d36571f05799b15 Dec 02 10:31:36 crc kubenswrapper[4711]: I1202 10:31:36.995091 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:36 crc kubenswrapper[4711]: E1202 10:31:36.995356 
4711 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 10:31:36 crc kubenswrapper[4711]: E1202 10:31:36.995400 4711 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 10:31:36 crc kubenswrapper[4711]: E1202 10:31:36.995482 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift podName:23030cd9-0bb2-4574-8c49-405bef4719b5 nodeName:}" failed. No retries permitted until 2025-12-02 10:31:44.995457076 +0000 UTC m=+1094.704823523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift") pod "swift-storage-0" (UID: "23030cd9-0bb2-4574-8c49-405bef4719b5") : configmap "swift-ring-files" not found Dec 02 10:31:37 crc kubenswrapper[4711]: I1202 10:31:37.535404 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tsgjc" event={"ID":"78e039b8-78fb-43a0-9ab0-7d3a6dc43198","Type":"ContainerStarted","Data":"091fe46167695c3688446bd74588ba6caef910c44556aea30d36571f05799b15"} Dec 02 10:31:37 crc kubenswrapper[4711]: W1202 10:31:37.947771 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31e044e2_2341_4cf1_8669_9beba7eec45c.slice/crio-e5615d3bd9ccfe6761a8b07f1c9793959c696f6140abea0f324e25760c9dc445 WatchSource:0}: Error finding container e5615d3bd9ccfe6761a8b07f1c9793959c696f6140abea0f324e25760c9dc445: Status 404 returned error can't find the container with id e5615d3bd9ccfe6761a8b07f1c9793959c696f6140abea0f324e25760c9dc445 Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.473689 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d47a-account-create-update-r7rzm"] Dec 02 10:31:38 crc 
kubenswrapper[4711]: I1202 10:31:38.480011 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9162-account-create-update-vnj4b"] Dec 02 10:31:38 crc kubenswrapper[4711]: W1202 10:31:38.485477 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08062a70_e0c0_4bd0_b8e0_ab0a85d486a8.slice/crio-509593e2924f7b0eee19e04c6e4a4de0829d3977974508385936305f2db59248 WatchSource:0}: Error finding container 509593e2924f7b0eee19e04c6e4a4de0829d3977974508385936305f2db59248: Status 404 returned error can't find the container with id 509593e2924f7b0eee19e04c6e4a4de0829d3977974508385936305f2db59248 Dec 02 10:31:38 crc kubenswrapper[4711]: W1202 10:31:38.489221 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod006b5b7a_0ef9_442a_9e52_462f5ef784ee.slice/crio-885a09e0a2045dc7dd124d7652eedd3b46b67bb794219a6ef5979a011dad3738 WatchSource:0}: Error finding container 885a09e0a2045dc7dd124d7652eedd3b46b67bb794219a6ef5979a011dad3738: Status 404 returned error can't find the container with id 885a09e0a2045dc7dd124d7652eedd3b46b67bb794219a6ef5979a011dad3738 Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.496535 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vjxm2"] Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.543323 4711 generic.go:334] "Generic (PLEG): container finished" podID="31e044e2-2341-4cf1-8669-9beba7eec45c" containerID="92eff58edfde5b823784837d2d5a0e2e4cb3e59b132d4bc70132de6e5538c8ac" exitCode=0 Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.543379 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f853-account-create-update-zk2bx" event={"ID":"31e044e2-2341-4cf1-8669-9beba7eec45c","Type":"ContainerDied","Data":"92eff58edfde5b823784837d2d5a0e2e4cb3e59b132d4bc70132de6e5538c8ac"} Dec 02 10:31:38 crc 
kubenswrapper[4711]: I1202 10:31:38.543403 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f853-account-create-update-zk2bx" event={"ID":"31e044e2-2341-4cf1-8669-9beba7eec45c","Type":"ContainerStarted","Data":"e5615d3bd9ccfe6761a8b07f1c9793959c696f6140abea0f324e25760c9dc445"} Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.545005 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9162-account-create-update-vnj4b" event={"ID":"006b5b7a-0ef9-442a-9e52-462f5ef784ee","Type":"ContainerStarted","Data":"885a09e0a2045dc7dd124d7652eedd3b46b67bb794219a6ef5979a011dad3738"} Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.547195 4711 generic.go:334] "Generic (PLEG): container finished" podID="78e039b8-78fb-43a0-9ab0-7d3a6dc43198" containerID="d090711390aa5aff883abb4f0488ec192a0764d961208bf53c98a4e2a59648d1" exitCode=0 Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.547247 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tsgjc" event={"ID":"78e039b8-78fb-43a0-9ab0-7d3a6dc43198","Type":"ContainerDied","Data":"d090711390aa5aff883abb4f0488ec192a0764d961208bf53c98a4e2a59648d1"} Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.548426 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-58jtp" event={"ID":"848cb525-39ab-47d7-99fc-9fbc249e740a","Type":"ContainerStarted","Data":"5ddbda90f04a7235341ce17c331945f40965dee7a228a39d4daaf502ee924175"} Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.550345 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d47a-account-create-update-r7rzm" event={"ID":"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8","Type":"ContainerStarted","Data":"509593e2924f7b0eee19e04c6e4a4de0829d3977974508385936305f2db59248"} Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.583422 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-ring-rebalance-58jtp" podStartSLOduration=1.8240866489999998 podStartE2EDuration="5.583403042s" podCreationTimestamp="2025-12-02 10:31:33 +0000 UTC" firstStartedPulling="2025-12-02 10:31:34.282939212 +0000 UTC m=+1083.992305659" lastFinishedPulling="2025-12-02 10:31:38.042255595 +0000 UTC m=+1087.751622052" observedRunningTime="2025-12-02 10:31:38.582161189 +0000 UTC m=+1088.291527646" watchObservedRunningTime="2025-12-02 10:31:38.583403042 +0000 UTC m=+1088.292769499" Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.610093 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fv2p5"] Dec 02 10:31:38 crc kubenswrapper[4711]: W1202 10:31:38.616243 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf13da34c_a52d_4dde_8514_0ddb2cac7f4c.slice/crio-beb11d88b60c8bd937ca668738f610ac12d92552d6d38ce790f3565af648af87 WatchSource:0}: Error finding container beb11d88b60c8bd937ca668738f610ac12d92552d6d38ce790f3565af648af87: Status 404 returned error can't find the container with id beb11d88b60c8bd937ca668738f610ac12d92552d6d38ce790f3565af648af87 Dec 02 10:31:38 crc kubenswrapper[4711]: W1202 10:31:38.618430 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9486b59_2bf0_492f_84e1_0a832e7b366c.slice/crio-ede6322f581ab903945693228b11265fd3d4f3c8b4b0b2f8b46f3879a0e95581 WatchSource:0}: Error finding container ede6322f581ab903945693228b11265fd3d4f3c8b4b0b2f8b46f3879a0e95581: Status 404 returned error can't find the container with id ede6322f581ab903945693228b11265fd3d4f3c8b4b0b2f8b46f3879a0e95581 Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.635192 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.767109 4711 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gz2qs"] Dec 02 10:31:38 crc kubenswrapper[4711]: I1202 10:31:38.767346 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" podUID="afd67557-1628-47e8-b608-720eca21e334" containerName="dnsmasq-dns" containerID="cri-o://fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154" gracePeriod=10 Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.216574 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.370311 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-dns-svc\") pod \"afd67557-1628-47e8-b608-720eca21e334\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.370385 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-config\") pod \"afd67557-1628-47e8-b608-720eca21e334\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.370479 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqjz2\" (UniqueName: \"kubernetes.io/projected/afd67557-1628-47e8-b608-720eca21e334-kube-api-access-kqjz2\") pod \"afd67557-1628-47e8-b608-720eca21e334\" (UID: \"afd67557-1628-47e8-b608-720eca21e334\") " Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.376171 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd67557-1628-47e8-b608-720eca21e334-kube-api-access-kqjz2" (OuterVolumeSpecName: "kube-api-access-kqjz2") pod 
"afd67557-1628-47e8-b608-720eca21e334" (UID: "afd67557-1628-47e8-b608-720eca21e334"). InnerVolumeSpecName "kube-api-access-kqjz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.427716 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afd67557-1628-47e8-b608-720eca21e334" (UID: "afd67557-1628-47e8-b608-720eca21e334"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.429364 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-config" (OuterVolumeSpecName: "config") pod "afd67557-1628-47e8-b608-720eca21e334" (UID: "afd67557-1628-47e8-b608-720eca21e334"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.472134 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqjz2\" (UniqueName: \"kubernetes.io/projected/afd67557-1628-47e8-b608-720eca21e334-kube-api-access-kqjz2\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.472171 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.472185 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd67557-1628-47e8-b608-720eca21e334-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.562571 4711 generic.go:334] "Generic (PLEG): container finished" podID="afd67557-1628-47e8-b608-720eca21e334" 
containerID="fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154" exitCode=0 Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.562626 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.562665 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" event={"ID":"afd67557-1628-47e8-b608-720eca21e334","Type":"ContainerDied","Data":"fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154"} Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.562709 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gz2qs" event={"ID":"afd67557-1628-47e8-b608-720eca21e334","Type":"ContainerDied","Data":"676c357d32799df2df8f85473a0bc59f1e9273e80635e740e49965d09c4513e1"} Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.562732 4711 scope.go:117] "RemoveContainer" containerID="fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.564691 4711 generic.go:334] "Generic (PLEG): container finished" podID="006b5b7a-0ef9-442a-9e52-462f5ef784ee" containerID="56e7c9ca0248e1785a00316996609f3abee131645d899af82cfda258f40629dd" exitCode=0 Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.564781 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9162-account-create-update-vnj4b" event={"ID":"006b5b7a-0ef9-442a-9e52-462f5ef784ee","Type":"ContainerDied","Data":"56e7c9ca0248e1785a00316996609f3abee131645d899af82cfda258f40629dd"} Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.567488 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fv2p5" event={"ID":"b9486b59-2bf0-492f-84e1-0a832e7b366c","Type":"ContainerStarted","Data":"a23865f833befa176ed225d08d57883ae1c18cbd1a75d9a79dde95c3cece8ad6"} Dec 02 10:31:39 crc 
kubenswrapper[4711]: I1202 10:31:39.567528 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fv2p5" event={"ID":"b9486b59-2bf0-492f-84e1-0a832e7b366c","Type":"ContainerStarted","Data":"ede6322f581ab903945693228b11265fd3d4f3c8b4b0b2f8b46f3879a0e95581"} Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.575540 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vjxm2" event={"ID":"f13da34c-a52d-4dde-8514-0ddb2cac7f4c","Type":"ContainerStarted","Data":"1ccdd062e522abbec1944a8ee7148a542590712f2e9d5e34803bb78ee2f462b8"} Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.575615 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vjxm2" event={"ID":"f13da34c-a52d-4dde-8514-0ddb2cac7f4c","Type":"ContainerStarted","Data":"beb11d88b60c8bd937ca668738f610ac12d92552d6d38ce790f3565af648af87"} Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.579458 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d47a-account-create-update-r7rzm" event={"ID":"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8","Type":"ContainerStarted","Data":"316b3c207af77a1cdf7f8488ac89dcfb306bf4a18c8b078b30f9d746054fcf33"} Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.603800 4711 scope.go:117] "RemoveContainer" containerID="49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.620614 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-fv2p5" podStartSLOduration=3.620594067 podStartE2EDuration="3.620594067s" podCreationTimestamp="2025-12-02 10:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:31:39.614234663 +0000 UTC m=+1089.323601160" watchObservedRunningTime="2025-12-02 10:31:39.620594067 +0000 UTC m=+1089.329960514" Dec 02 
10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.640851 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gz2qs"] Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.649705 4711 scope.go:117] "RemoveContainer" containerID="fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154" Dec 02 10:31:39 crc kubenswrapper[4711]: E1202 10:31:39.650596 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154\": container with ID starting with fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154 not found: ID does not exist" containerID="fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.650620 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154"} err="failed to get container status \"fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154\": rpc error: code = NotFound desc = could not find container \"fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154\": container with ID starting with fc155a7fe45e07f7f9cfbd01d1bd551143947903ea2563b14aee2cb2b24c7154 not found: ID does not exist" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.650641 4711 scope.go:117] "RemoveContainer" containerID="49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b" Dec 02 10:31:39 crc kubenswrapper[4711]: E1202 10:31:39.651012 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b\": container with ID starting with 49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b not found: ID does not exist" 
containerID="49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.651032 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b"} err="failed to get container status \"49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b\": rpc error: code = NotFound desc = could not find container \"49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b\": container with ID starting with 49204fa08d892d62b5043ab169715143669a4036facb1f5abc7d884d5fe8999b not found: ID does not exist" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.656150 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gz2qs"] Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.656772 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-vjxm2" podStartSLOduration=4.656733221 podStartE2EDuration="4.656733221s" podCreationTimestamp="2025-12-02 10:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:31:39.648658408 +0000 UTC m=+1089.358024855" watchObservedRunningTime="2025-12-02 10:31:39.656733221 +0000 UTC m=+1089.366099658" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.668969 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d47a-account-create-update-r7rzm" podStartSLOduration=4.668935576 podStartE2EDuration="4.668935576s" podCreationTimestamp="2025-12-02 10:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:31:39.661032409 +0000 UTC m=+1089.370398846" watchObservedRunningTime="2025-12-02 10:31:39.668935576 +0000 UTC m=+1089.378302023" Dec 02 
10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.908351 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:39 crc kubenswrapper[4711]: I1202 10:31:39.934284 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.085918 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31e044e2-2341-4cf1-8669-9beba7eec45c-operator-scripts\") pod \"31e044e2-2341-4cf1-8669-9beba7eec45c\" (UID: \"31e044e2-2341-4cf1-8669-9beba7eec45c\") " Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.086217 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-operator-scripts\") pod \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\" (UID: \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\") " Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.086293 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swlcs\" (UniqueName: \"kubernetes.io/projected/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-kube-api-access-swlcs\") pod \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\" (UID: \"78e039b8-78fb-43a0-9ab0-7d3a6dc43198\") " Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.086384 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4lv9\" (UniqueName: \"kubernetes.io/projected/31e044e2-2341-4cf1-8669-9beba7eec45c-kube-api-access-v4lv9\") pod \"31e044e2-2341-4cf1-8669-9beba7eec45c\" (UID: \"31e044e2-2341-4cf1-8669-9beba7eec45c\") " Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.086394 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/31e044e2-2341-4cf1-8669-9beba7eec45c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31e044e2-2341-4cf1-8669-9beba7eec45c" (UID: "31e044e2-2341-4cf1-8669-9beba7eec45c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.086658 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78e039b8-78fb-43a0-9ab0-7d3a6dc43198" (UID: "78e039b8-78fb-43a0-9ab0-7d3a6dc43198"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.086881 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31e044e2-2341-4cf1-8669-9beba7eec45c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.086900 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.090738 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e044e2-2341-4cf1-8669-9beba7eec45c-kube-api-access-v4lv9" (OuterVolumeSpecName: "kube-api-access-v4lv9") pod "31e044e2-2341-4cf1-8669-9beba7eec45c" (UID: "31e044e2-2341-4cf1-8669-9beba7eec45c"). InnerVolumeSpecName "kube-api-access-v4lv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.091336 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-kube-api-access-swlcs" (OuterVolumeSpecName: "kube-api-access-swlcs") pod "78e039b8-78fb-43a0-9ab0-7d3a6dc43198" (UID: "78e039b8-78fb-43a0-9ab0-7d3a6dc43198"). InnerVolumeSpecName "kube-api-access-swlcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.188549 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swlcs\" (UniqueName: \"kubernetes.io/projected/78e039b8-78fb-43a0-9ab0-7d3a6dc43198-kube-api-access-swlcs\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.188605 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4lv9\" (UniqueName: \"kubernetes.io/projected/31e044e2-2341-4cf1-8669-9beba7eec45c-kube-api-access-v4lv9\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.592976 4711 generic.go:334] "Generic (PLEG): container finished" podID="08062a70-e0c0-4bd0-b8e0-ab0a85d486a8" containerID="316b3c207af77a1cdf7f8488ac89dcfb306bf4a18c8b078b30f9d746054fcf33" exitCode=0 Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.593068 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d47a-account-create-update-r7rzm" event={"ID":"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8","Type":"ContainerDied","Data":"316b3c207af77a1cdf7f8488ac89dcfb306bf4a18c8b078b30f9d746054fcf33"} Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.596380 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f853-account-create-update-zk2bx" event={"ID":"31e044e2-2341-4cf1-8669-9beba7eec45c","Type":"ContainerDied","Data":"e5615d3bd9ccfe6761a8b07f1c9793959c696f6140abea0f324e25760c9dc445"} Dec 02 
10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.596440 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5615d3bd9ccfe6761a8b07f1c9793959c696f6140abea0f324e25760c9dc445" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.596402 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f853-account-create-update-zk2bx" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.601229 4711 generic.go:334] "Generic (PLEG): container finished" podID="b9486b59-2bf0-492f-84e1-0a832e7b366c" containerID="a23865f833befa176ed225d08d57883ae1c18cbd1a75d9a79dde95c3cece8ad6" exitCode=0 Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.601295 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fv2p5" event={"ID":"b9486b59-2bf0-492f-84e1-0a832e7b366c","Type":"ContainerDied","Data":"a23865f833befa176ed225d08d57883ae1c18cbd1a75d9a79dde95c3cece8ad6"} Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.603344 4711 generic.go:334] "Generic (PLEG): container finished" podID="f13da34c-a52d-4dde-8514-0ddb2cac7f4c" containerID="1ccdd062e522abbec1944a8ee7148a542590712f2e9d5e34803bb78ee2f462b8" exitCode=0 Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.603475 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vjxm2" event={"ID":"f13da34c-a52d-4dde-8514-0ddb2cac7f4c","Type":"ContainerDied","Data":"1ccdd062e522abbec1944a8ee7148a542590712f2e9d5e34803bb78ee2f462b8"} Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.605444 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tsgjc" Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.605510 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tsgjc" event={"ID":"78e039b8-78fb-43a0-9ab0-7d3a6dc43198","Type":"ContainerDied","Data":"091fe46167695c3688446bd74588ba6caef910c44556aea30d36571f05799b15"} Dec 02 10:31:40 crc kubenswrapper[4711]: I1202 10:31:40.605541 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="091fe46167695c3688446bd74588ba6caef910c44556aea30d36571f05799b15" Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.070083 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.098917 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd67557-1628-47e8-b608-720eca21e334" path="/var/lib/kubelet/pods/afd67557-1628-47e8-b608-720eca21e334/volumes" Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.208042 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8588\" (UniqueName: \"kubernetes.io/projected/006b5b7a-0ef9-442a-9e52-462f5ef784ee-kube-api-access-g8588\") pod \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\" (UID: \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\") " Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.208239 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006b5b7a-0ef9-442a-9e52-462f5ef784ee-operator-scripts\") pod \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\" (UID: \"006b5b7a-0ef9-442a-9e52-462f5ef784ee\") " Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.209067 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006b5b7a-0ef9-442a-9e52-462f5ef784ee-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "006b5b7a-0ef9-442a-9e52-462f5ef784ee" (UID: "006b5b7a-0ef9-442a-9e52-462f5ef784ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.213293 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006b5b7a-0ef9-442a-9e52-462f5ef784ee-kube-api-access-g8588" (OuterVolumeSpecName: "kube-api-access-g8588") pod "006b5b7a-0ef9-442a-9e52-462f5ef784ee" (UID: "006b5b7a-0ef9-442a-9e52-462f5ef784ee"). InnerVolumeSpecName "kube-api-access-g8588". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.310396 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8588\" (UniqueName: \"kubernetes.io/projected/006b5b7a-0ef9-442a-9e52-462f5ef784ee-kube-api-access-g8588\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.310465 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006b5b7a-0ef9-442a-9e52-462f5ef784ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.619021 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9162-account-create-update-vnj4b" event={"ID":"006b5b7a-0ef9-442a-9e52-462f5ef784ee","Type":"ContainerDied","Data":"885a09e0a2045dc7dd124d7652eedd3b46b67bb794219a6ef5979a011dad3738"} Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.619145 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9162-account-create-update-vnj4b" Dec 02 10:31:41 crc kubenswrapper[4711]: I1202 10:31:41.619169 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885a09e0a2045dc7dd124d7652eedd3b46b67bb794219a6ef5979a011dad3738" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.049272 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.062899 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.127820 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-operator-scripts\") pod \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\" (UID: \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\") " Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.127889 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntnx5\" (UniqueName: \"kubernetes.io/projected/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-kube-api-access-ntnx5\") pod \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\" (UID: \"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8\") " Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.128579 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08062a70-e0c0-4bd0-b8e0-ab0a85d486a8" (UID: "08062a70-e0c0-4bd0-b8e0-ab0a85d486a8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.131838 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-kube-api-access-ntnx5" (OuterVolumeSpecName: "kube-api-access-ntnx5") pod "08062a70-e0c0-4bd0-b8e0-ab0a85d486a8" (UID: "08062a70-e0c0-4bd0-b8e0-ab0a85d486a8"). InnerVolumeSpecName "kube-api-access-ntnx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.191917 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.201861 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.230355 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.230384 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntnx5\" (UniqueName: \"kubernetes.io/projected/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8-kube-api-access-ntnx5\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.332020 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9486b59-2bf0-492f-84e1-0a832e7b366c-operator-scripts\") pod \"b9486b59-2bf0-492f-84e1-0a832e7b366c\" (UID: \"b9486b59-2bf0-492f-84e1-0a832e7b366c\") " Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.332137 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4vzc\" (UniqueName: 
\"kubernetes.io/projected/b9486b59-2bf0-492f-84e1-0a832e7b366c-kube-api-access-h4vzc\") pod \"b9486b59-2bf0-492f-84e1-0a832e7b366c\" (UID: \"b9486b59-2bf0-492f-84e1-0a832e7b366c\") " Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.332180 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-operator-scripts\") pod \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\" (UID: \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\") " Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.332254 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f9tb\" (UniqueName: \"kubernetes.io/projected/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-kube-api-access-8f9tb\") pod \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\" (UID: \"f13da34c-a52d-4dde-8514-0ddb2cac7f4c\") " Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.332677 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9486b59-2bf0-492f-84e1-0a832e7b366c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9486b59-2bf0-492f-84e1-0a832e7b366c" (UID: "b9486b59-2bf0-492f-84e1-0a832e7b366c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.333036 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f13da34c-a52d-4dde-8514-0ddb2cac7f4c" (UID: "f13da34c-a52d-4dde-8514-0ddb2cac7f4c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.335554 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-kube-api-access-8f9tb" (OuterVolumeSpecName: "kube-api-access-8f9tb") pod "f13da34c-a52d-4dde-8514-0ddb2cac7f4c" (UID: "f13da34c-a52d-4dde-8514-0ddb2cac7f4c"). InnerVolumeSpecName "kube-api-access-8f9tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.336045 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9486b59-2bf0-492f-84e1-0a832e7b366c-kube-api-access-h4vzc" (OuterVolumeSpecName: "kube-api-access-h4vzc") pod "b9486b59-2bf0-492f-84e1-0a832e7b366c" (UID: "b9486b59-2bf0-492f-84e1-0a832e7b366c"). InnerVolumeSpecName "kube-api-access-h4vzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.434576 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9486b59-2bf0-492f-84e1-0a832e7b366c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.434622 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4vzc\" (UniqueName: \"kubernetes.io/projected/b9486b59-2bf0-492f-84e1-0a832e7b366c-kube-api-access-h4vzc\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.434792 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.434807 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f9tb\" (UniqueName: 
\"kubernetes.io/projected/f13da34c-a52d-4dde-8514-0ddb2cac7f4c-kube-api-access-8f9tb\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.631212 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fv2p5" event={"ID":"b9486b59-2bf0-492f-84e1-0a832e7b366c","Type":"ContainerDied","Data":"ede6322f581ab903945693228b11265fd3d4f3c8b4b0b2f8b46f3879a0e95581"} Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.631307 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ede6322f581ab903945693228b11265fd3d4f3c8b4b0b2f8b46f3879a0e95581" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.631359 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fv2p5" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.633193 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vjxm2" event={"ID":"f13da34c-a52d-4dde-8514-0ddb2cac7f4c","Type":"ContainerDied","Data":"beb11d88b60c8bd937ca668738f610ac12d92552d6d38ce790f3565af648af87"} Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.633225 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb11d88b60c8bd937ca668738f610ac12d92552d6d38ce790f3565af648af87" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.633438 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vjxm2" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.634584 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d47a-account-create-update-r7rzm" event={"ID":"08062a70-e0c0-4bd0-b8e0-ab0a85d486a8","Type":"ContainerDied","Data":"509593e2924f7b0eee19e04c6e4a4de0829d3977974508385936305f2db59248"} Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.634608 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509593e2924f7b0eee19e04c6e4a4de0829d3977974508385936305f2db59248" Dec 02 10:31:42 crc kubenswrapper[4711]: I1202 10:31:42.634686 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d47a-account-create-update-r7rzm" Dec 02 10:31:45 crc kubenswrapper[4711]: I1202 10:31:45.080443 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:31:45 crc kubenswrapper[4711]: E1202 10:31:45.080833 4711 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 10:31:45 crc kubenswrapper[4711]: E1202 10:31:45.080927 4711 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 10:31:45 crc kubenswrapper[4711]: E1202 10:31:45.081100 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift podName:23030cd9-0bb2-4574-8c49-405bef4719b5 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:01.081048908 +0000 UTC m=+1110.790415395 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift") pod "swift-storage-0" (UID: "23030cd9-0bb2-4574-8c49-405bef4719b5") : configmap "swift-ring-files" not found Dec 02 10:31:45 crc kubenswrapper[4711]: I1202 10:31:45.664053 4711 generic.go:334] "Generic (PLEG): container finished" podID="848cb525-39ab-47d7-99fc-9fbc249e740a" containerID="5ddbda90f04a7235341ce17c331945f40965dee7a228a39d4daaf502ee924175" exitCode=0 Dec 02 10:31:45 crc kubenswrapper[4711]: I1202 10:31:45.664137 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-58jtp" event={"ID":"848cb525-39ab-47d7-99fc-9fbc249e740a","Type":"ContainerDied","Data":"5ddbda90f04a7235341ce17c331945f40965dee7a228a39d4daaf502ee924175"} Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.406997 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-svtvp"] Dec 02 10:31:46 crc kubenswrapper[4711]: E1202 10:31:46.407669 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd67557-1628-47e8-b608-720eca21e334" containerName="dnsmasq-dns" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.407720 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd67557-1628-47e8-b608-720eca21e334" containerName="dnsmasq-dns" Dec 02 10:31:46 crc kubenswrapper[4711]: E1202 10:31:46.407751 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006b5b7a-0ef9-442a-9e52-462f5ef784ee" containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.407761 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="006b5b7a-0ef9-442a-9e52-462f5ef784ee" containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: E1202 10:31:46.407798 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e044e2-2341-4cf1-8669-9beba7eec45c" 
containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.407807 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e044e2-2341-4cf1-8669-9beba7eec45c" containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: E1202 10:31:46.407822 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9486b59-2bf0-492f-84e1-0a832e7b366c" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.407830 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9486b59-2bf0-492f-84e1-0a832e7b366c" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: E1202 10:31:46.407853 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13da34c-a52d-4dde-8514-0ddb2cac7f4c" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.407883 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13da34c-a52d-4dde-8514-0ddb2cac7f4c" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: E1202 10:31:46.407897 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e039b8-78fb-43a0-9ab0-7d3a6dc43198" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.407905 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e039b8-78fb-43a0-9ab0-7d3a6dc43198" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: E1202 10:31:46.407917 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd67557-1628-47e8-b608-720eca21e334" containerName="init" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.407924 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd67557-1628-47e8-b608-720eca21e334" containerName="init" Dec 02 10:31:46 crc kubenswrapper[4711]: E1202 10:31:46.407983 4711 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="08062a70-e0c0-4bd0-b8e0-ab0a85d486a8" containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.407993 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="08062a70-e0c0-4bd0-b8e0-ab0a85d486a8" containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.408255 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd67557-1628-47e8-b608-720eca21e334" containerName="dnsmasq-dns" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.408304 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="08062a70-e0c0-4bd0-b8e0-ab0a85d486a8" containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.408322 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9486b59-2bf0-492f-84e1-0a832e7b366c" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.408332 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e044e2-2341-4cf1-8669-9beba7eec45c" containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.408344 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e039b8-78fb-43a0-9ab0-7d3a6dc43198" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.408371 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="006b5b7a-0ef9-442a-9e52-462f5ef784ee" containerName="mariadb-account-create-update" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.408385 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f13da34c-a52d-4dde-8514-0ddb2cac7f4c" containerName="mariadb-database-create" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.409708 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.413724 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.417740 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qcf8c" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.420688 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-svtvp"] Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.507564 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-db-sync-config-data\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.507689 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz7wb\" (UniqueName: \"kubernetes.io/projected/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-kube-api-access-wz7wb\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.507803 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-combined-ca-bundle\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.507839 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-config-data\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.608889 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-db-sync-config-data\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.609036 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz7wb\" (UniqueName: \"kubernetes.io/projected/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-kube-api-access-wz7wb\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.609144 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-combined-ca-bundle\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.609178 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-config-data\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.615430 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-db-sync-config-data\") pod \"glance-db-sync-svtvp\" (UID: 
\"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.616306 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-config-data\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.616625 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-combined-ca-bundle\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.638740 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz7wb\" (UniqueName: \"kubernetes.io/projected/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-kube-api-access-wz7wb\") pod \"glance-db-sync-svtvp\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") " pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:46 crc kubenswrapper[4711]: I1202 10:31:46.734131 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-svtvp" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.033228 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.120128 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/848cb525-39ab-47d7-99fc-9fbc249e740a-etc-swift\") pod \"848cb525-39ab-47d7-99fc-9fbc249e740a\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.120180 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-swiftconf\") pod \"848cb525-39ab-47d7-99fc-9fbc249e740a\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.120198 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-scripts\") pod \"848cb525-39ab-47d7-99fc-9fbc249e740a\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.120224 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb9mx\" (UniqueName: \"kubernetes.io/projected/848cb525-39ab-47d7-99fc-9fbc249e740a-kube-api-access-bb9mx\") pod \"848cb525-39ab-47d7-99fc-9fbc249e740a\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.120330 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-ring-data-devices\") pod \"848cb525-39ab-47d7-99fc-9fbc249e740a\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.120358 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-combined-ca-bundle\") pod \"848cb525-39ab-47d7-99fc-9fbc249e740a\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.120375 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-dispersionconf\") pod \"848cb525-39ab-47d7-99fc-9fbc249e740a\" (UID: \"848cb525-39ab-47d7-99fc-9fbc249e740a\") " Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.121180 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848cb525-39ab-47d7-99fc-9fbc249e740a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "848cb525-39ab-47d7-99fc-9fbc249e740a" (UID: "848cb525-39ab-47d7-99fc-9fbc249e740a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.123691 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "848cb525-39ab-47d7-99fc-9fbc249e740a" (UID: "848cb525-39ab-47d7-99fc-9fbc249e740a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.127717 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "848cb525-39ab-47d7-99fc-9fbc249e740a" (UID: "848cb525-39ab-47d7-99fc-9fbc249e740a"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.128666 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848cb525-39ab-47d7-99fc-9fbc249e740a-kube-api-access-bb9mx" (OuterVolumeSpecName: "kube-api-access-bb9mx") pod "848cb525-39ab-47d7-99fc-9fbc249e740a" (UID: "848cb525-39ab-47d7-99fc-9fbc249e740a"). InnerVolumeSpecName "kube-api-access-bb9mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.141831 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "848cb525-39ab-47d7-99fc-9fbc249e740a" (UID: "848cb525-39ab-47d7-99fc-9fbc249e740a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.142071 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-scripts" (OuterVolumeSpecName: "scripts") pod "848cb525-39ab-47d7-99fc-9fbc249e740a" (UID: "848cb525-39ab-47d7-99fc-9fbc249e740a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.145141 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "848cb525-39ab-47d7-99fc-9fbc249e740a" (UID: "848cb525-39ab-47d7-99fc-9fbc249e740a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.222534 4711 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.222588 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.222597 4711 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.222617 4711 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/848cb525-39ab-47d7-99fc-9fbc249e740a-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.222627 4711 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/848cb525-39ab-47d7-99fc-9fbc249e740a-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.222637 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/848cb525-39ab-47d7-99fc-9fbc249e740a-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.222662 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb9mx\" (UniqueName: \"kubernetes.io/projected/848cb525-39ab-47d7-99fc-9fbc249e740a-kube-api-access-bb9mx\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:47 crc kubenswrapper[4711]: W1202 10:31:47.428007 4711 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b94dbaa_33c0_42b0_b71a_9af5fda1a876.slice/crio-35570640b8337029e1cfdf46524f8287594d6d5056869d864413979ceb952596 WatchSource:0}: Error finding container 35570640b8337029e1cfdf46524f8287594d6d5056869d864413979ceb952596: Status 404 returned error can't find the container with id 35570640b8337029e1cfdf46524f8287594d6d5056869d864413979ceb952596 Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.429526 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-svtvp"] Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.679863 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-svtvp" event={"ID":"2b94dbaa-33c0-42b0-b71a-9af5fda1a876","Type":"ContainerStarted","Data":"35570640b8337029e1cfdf46524f8287594d6d5056869d864413979ceb952596"} Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.681674 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-58jtp" event={"ID":"848cb525-39ab-47d7-99fc-9fbc249e740a","Type":"ContainerDied","Data":"438282a1d2d92fb312fdd2297ecd5cc238c25d0bcf72200bb34474bc6e815be1"} Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.681716 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438282a1d2d92fb312fdd2297ecd5cc238c25d0bcf72200bb34474bc6e815be1" Dec 02 10:31:47 crc kubenswrapper[4711]: I1202 10:31:47.681773 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-58jtp" Dec 02 10:31:49 crc kubenswrapper[4711]: I1202 10:31:49.705357 4711 generic.go:334] "Generic (PLEG): container finished" podID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerID="8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7" exitCode=0 Dec 02 10:31:49 crc kubenswrapper[4711]: I1202 10:31:49.705417 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdbcea35-5752-4be6-a7db-0f3aa362be58","Type":"ContainerDied","Data":"8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7"} Dec 02 10:31:50 crc kubenswrapper[4711]: I1202 10:31:50.716698 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdbcea35-5752-4be6-a7db-0f3aa362be58","Type":"ContainerStarted","Data":"2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898"} Dec 02 10:31:50 crc kubenswrapper[4711]: I1202 10:31:50.718397 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:31:50 crc kubenswrapper[4711]: I1202 10:31:50.720194 4711 generic.go:334] "Generic (PLEG): container finished" podID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" containerID="59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4" exitCode=0 Dec 02 10:31:50 crc kubenswrapper[4711]: I1202 10:31:50.720245 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29","Type":"ContainerDied","Data":"59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4"} Dec 02 10:31:50 crc kubenswrapper[4711]: I1202 10:31:50.787828 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.286386037 podStartE2EDuration="59.787778929s" podCreationTimestamp="2025-12-02 10:30:51 +0000 UTC" firstStartedPulling="2025-12-02 
10:31:07.275225357 +0000 UTC m=+1056.984591804" lastFinishedPulling="2025-12-02 10:31:15.776618239 +0000 UTC m=+1065.485984696" observedRunningTime="2025-12-02 10:31:50.75289955 +0000 UTC m=+1100.462266057" watchObservedRunningTime="2025-12-02 10:31:50.787778929 +0000 UTC m=+1100.497145376" Dec 02 10:31:51 crc kubenswrapper[4711]: I1202 10:31:51.732689 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29","Type":"ContainerStarted","Data":"245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba"} Dec 02 10:31:51 crc kubenswrapper[4711]: I1202 10:31:51.733025 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 02 10:31:51 crc kubenswrapper[4711]: I1202 10:31:51.765349 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.549682144 podStartE2EDuration="1m0.765328585s" podCreationTimestamp="2025-12-02 10:30:51 +0000 UTC" firstStartedPulling="2025-12-02 10:31:05.305920915 +0000 UTC m=+1055.015287362" lastFinishedPulling="2025-12-02 10:31:15.521567356 +0000 UTC m=+1065.230933803" observedRunningTime="2025-12-02 10:31:51.763785292 +0000 UTC m=+1101.473151779" watchObservedRunningTime="2025-12-02 10:31:51.765328585 +0000 UTC m=+1101.474695032" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.560042 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q57lb" podUID="7ce53b33-b78a-446d-b345-c8d918209ddf" containerName="ovn-controller" probeResult="failure" output=< Dec 02 10:31:52 crc kubenswrapper[4711]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 02 10:31:52 crc kubenswrapper[4711]: > Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.584349 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lxtbd" 
Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.594519 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lxtbd" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.852473 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q57lb-config-pskps"] Dec 02 10:31:52 crc kubenswrapper[4711]: E1202 10:31:52.853260 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848cb525-39ab-47d7-99fc-9fbc249e740a" containerName="swift-ring-rebalance" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.853294 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="848cb525-39ab-47d7-99fc-9fbc249e740a" containerName="swift-ring-rebalance" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.853539 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="848cb525-39ab-47d7-99fc-9fbc249e740a" containerName="swift-ring-rebalance" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.854444 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.856883 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.869536 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q57lb-config-pskps"] Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.946990 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run-ovn\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.947067 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-scripts\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.947108 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-log-ovn\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.947170 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmz7\" (UniqueName: \"kubernetes.io/projected/7f116971-a4cd-4f9c-b722-f5b00f630956-kube-api-access-pgmz7\") pod \"ovn-controller-q57lb-config-pskps\" (UID: 
\"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.947190 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:52 crc kubenswrapper[4711]: I1202 10:31:52.947282 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-additional-scripts\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.049667 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-additional-scripts\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.049785 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run-ovn\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.049814 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-scripts\") pod 
\"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.049849 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-log-ovn\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.049908 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgmz7\" (UniqueName: \"kubernetes.io/projected/7f116971-a4cd-4f9c-b722-f5b00f630956-kube-api-access-pgmz7\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.049931 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.050415 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.050486 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run-ovn\") pod \"ovn-controller-q57lb-config-pskps\" (UID: 
\"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.050604 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-additional-scripts\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.050719 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-log-ovn\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:53 crc kubenswrapper[4711]: I1202 10:31:53.073715 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgmz7\" (UniqueName: \"kubernetes.io/projected/7f116971-a4cd-4f9c-b722-f5b00f630956-kube-api-access-pgmz7\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:55 crc kubenswrapper[4711]: I1202 10:31:55.483013 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-scripts\") pod \"ovn-controller-q57lb-config-pskps\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:55 crc kubenswrapper[4711]: I1202 10:31:55.576592 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:31:57 crc kubenswrapper[4711]: I1202 10:31:57.574540 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q57lb" podUID="7ce53b33-b78a-446d-b345-c8d918209ddf" containerName="ovn-controller" probeResult="failure" output=< Dec 02 10:31:57 crc kubenswrapper[4711]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 02 10:31:57 crc kubenswrapper[4711]: > Dec 02 10:32:01 crc kubenswrapper[4711]: I1202 10:32:01.083522 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:32:01 crc kubenswrapper[4711]: I1202 10:32:01.094476 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/23030cd9-0bb2-4574-8c49-405bef4719b5-etc-swift\") pod \"swift-storage-0\" (UID: \"23030cd9-0bb2-4574-8c49-405bef4719b5\") " pod="openstack/swift-storage-0" Dec 02 10:32:01 crc kubenswrapper[4711]: I1202 10:32:01.268590 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.322687 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q57lb-config-pskps"] Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.420333 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 02 10:32:02 crc kubenswrapper[4711]: W1202 10:32:02.429547 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23030cd9_0bb2_4574_8c49_405bef4719b5.slice/crio-598be64a56d7676726cf61a1dc4618c241e6beec1bee5de29afebf56162ae307 WatchSource:0}: Error finding container 598be64a56d7676726cf61a1dc4618c241e6beec1bee5de29afebf56162ae307: Status 404 returned error can't find the container with id 598be64a56d7676726cf61a1dc4618c241e6beec1bee5de29afebf56162ae307 Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.511211 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.620139 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q57lb" podUID="7ce53b33-b78a-446d-b345-c8d918209ddf" containerName="ovn-controller" probeResult="failure" output=< Dec 02 10:32:02 crc kubenswrapper[4711]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 02 10:32:02 crc kubenswrapper[4711]: > Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.778902 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vc295"] Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.780495 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vc295" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.801164 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vc295"] Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.825640 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-svtvp" event={"ID":"2b94dbaa-33c0-42b0-b71a-9af5fda1a876","Type":"ContainerStarted","Data":"cb3977ccc770ae471abbd9e993cb93fda494ef6e15702b11add2b16bec094fed"} Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.833462 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"598be64a56d7676726cf61a1dc4618c241e6beec1bee5de29afebf56162ae307"} Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.840776 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb-config-pskps" event={"ID":"7f116971-a4cd-4f9c-b722-f5b00f630956","Type":"ContainerStarted","Data":"e632f4ed340d82d415adfab0ed398b9b08cb10d0068e010bc2cdc241638edf3d"} Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.840821 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb-config-pskps" event={"ID":"7f116971-a4cd-4f9c-b722-f5b00f630956","Type":"ContainerStarted","Data":"5be12818cb41241f1ba57100dc3ecb129efafbf24ccf5655420b67a86246c9c7"} Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.859201 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-svtvp" podStartSLOduration=2.356744447 podStartE2EDuration="16.859182991s" podCreationTimestamp="2025-12-02 10:31:46 +0000 UTC" firstStartedPulling="2025-12-02 10:31:47.432084934 +0000 UTC m=+1097.141451401" lastFinishedPulling="2025-12-02 10:32:01.934523498 +0000 UTC m=+1111.643889945" observedRunningTime="2025-12-02 10:32:02.854207686 +0000 UTC 
m=+1112.563574143" watchObservedRunningTime="2025-12-02 10:32:02.859182991 +0000 UTC m=+1112.568549438" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.870991 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jbrrj"] Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.872028 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jbrrj" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.879983 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-dcca-account-create-update-w2cd7"] Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.881875 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dcca-account-create-update-w2cd7" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.883558 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.905299 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jbrrj"] Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.906872 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.911490 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-dcca-account-create-update-w2cd7"] Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.912927 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q57lb-config-pskps" podStartSLOduration=10.912907711999999 podStartE2EDuration="10.912907712s" podCreationTimestamp="2025-12-02 10:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:02.885017104 +0000 UTC m=+1112.594383561" 
watchObservedRunningTime="2025-12-02 10:32:02.912907712 +0000 UTC m=+1112.622274159" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.919672 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j79rc\" (UniqueName: \"kubernetes.io/projected/366b0e56-2601-4ae2-90be-958339d5bde1-kube-api-access-j79rc\") pod \"cinder-db-create-vc295\" (UID: \"366b0e56-2601-4ae2-90be-958339d5bde1\") " pod="openstack/cinder-db-create-vc295" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.919725 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366b0e56-2601-4ae2-90be-958339d5bde1-operator-scripts\") pod \"cinder-db-create-vc295\" (UID: \"366b0e56-2601-4ae2-90be-958339d5bde1\") " pod="openstack/cinder-db-create-vc295" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.983289 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9019-account-create-update-znhxs"] Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.984493 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9019-account-create-update-znhxs" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.986695 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 02 10:32:02 crc kubenswrapper[4711]: I1202 10:32:02.997620 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9019-account-create-update-znhxs"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.021165 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5t2c\" (UniqueName: \"kubernetes.io/projected/91b4e096-f633-4842-a5e1-9cc10c99ff50-kube-api-access-v5t2c\") pod \"barbican-db-create-jbrrj\" (UID: \"91b4e096-f633-4842-a5e1-9cc10c99ff50\") " pod="openstack/barbican-db-create-jbrrj" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.021227 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-operator-scripts\") pod \"barbican-dcca-account-create-update-w2cd7\" (UID: \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\") " pod="openstack/barbican-dcca-account-create-update-w2cd7" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.021269 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjm9k\" (UniqueName: \"kubernetes.io/projected/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-kube-api-access-jjm9k\") pod \"barbican-dcca-account-create-update-w2cd7\" (UID: \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\") " pod="openstack/barbican-dcca-account-create-update-w2cd7" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.021291 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j79rc\" (UniqueName: \"kubernetes.io/projected/366b0e56-2601-4ae2-90be-958339d5bde1-kube-api-access-j79rc\") pod 
\"cinder-db-create-vc295\" (UID: \"366b0e56-2601-4ae2-90be-958339d5bde1\") " pod="openstack/cinder-db-create-vc295" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.021321 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366b0e56-2601-4ae2-90be-958339d5bde1-operator-scripts\") pod \"cinder-db-create-vc295\" (UID: \"366b0e56-2601-4ae2-90be-958339d5bde1\") " pod="openstack/cinder-db-create-vc295" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.021384 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b4e096-f633-4842-a5e1-9cc10c99ff50-operator-scripts\") pod \"barbican-db-create-jbrrj\" (UID: \"91b4e096-f633-4842-a5e1-9cc10c99ff50\") " pod="openstack/barbican-db-create-jbrrj" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.023270 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366b0e56-2601-4ae2-90be-958339d5bde1-operator-scripts\") pod \"cinder-db-create-vc295\" (UID: \"366b0e56-2601-4ae2-90be-958339d5bde1\") " pod="openstack/cinder-db-create-vc295" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.058518 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j79rc\" (UniqueName: \"kubernetes.io/projected/366b0e56-2601-4ae2-90be-958339d5bde1-kube-api-access-j79rc\") pod \"cinder-db-create-vc295\" (UID: \"366b0e56-2601-4ae2-90be-958339d5bde1\") " pod="openstack/cinder-db-create-vc295" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.099937 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vc295" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.122841 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b4e096-f633-4842-a5e1-9cc10c99ff50-operator-scripts\") pod \"barbican-db-create-jbrrj\" (UID: \"91b4e096-f633-4842-a5e1-9cc10c99ff50\") " pod="openstack/barbican-db-create-jbrrj" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.123229 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-operator-scripts\") pod \"cinder-9019-account-create-update-znhxs\" (UID: \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\") " pod="openstack/cinder-9019-account-create-update-znhxs" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.123403 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jsqn\" (UniqueName: \"kubernetes.io/projected/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-kube-api-access-8jsqn\") pod \"cinder-9019-account-create-update-znhxs\" (UID: \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\") " pod="openstack/cinder-9019-account-create-update-znhxs" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.123527 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5t2c\" (UniqueName: \"kubernetes.io/projected/91b4e096-f633-4842-a5e1-9cc10c99ff50-kube-api-access-v5t2c\") pod \"barbican-db-create-jbrrj\" (UID: \"91b4e096-f633-4842-a5e1-9cc10c99ff50\") " pod="openstack/barbican-db-create-jbrrj" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.123651 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-operator-scripts\") pod 
\"barbican-dcca-account-create-update-w2cd7\" (UID: \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\") " pod="openstack/barbican-dcca-account-create-update-w2cd7" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.123735 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjm9k\" (UniqueName: \"kubernetes.io/projected/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-kube-api-access-jjm9k\") pod \"barbican-dcca-account-create-update-w2cd7\" (UID: \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\") " pod="openstack/barbican-dcca-account-create-update-w2cd7" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.123660 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b4e096-f633-4842-a5e1-9cc10c99ff50-operator-scripts\") pod \"barbican-db-create-jbrrj\" (UID: \"91b4e096-f633-4842-a5e1-9cc10c99ff50\") " pod="openstack/barbican-db-create-jbrrj" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.124481 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-operator-scripts\") pod \"barbican-dcca-account-create-update-w2cd7\" (UID: \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\") " pod="openstack/barbican-dcca-account-create-update-w2cd7" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.167374 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5t2c\" (UniqueName: \"kubernetes.io/projected/91b4e096-f633-4842-a5e1-9cc10c99ff50-kube-api-access-v5t2c\") pod \"barbican-db-create-jbrrj\" (UID: \"91b4e096-f633-4842-a5e1-9cc10c99ff50\") " pod="openstack/barbican-db-create-jbrrj" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.175503 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjm9k\" (UniqueName: 
\"kubernetes.io/projected/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-kube-api-access-jjm9k\") pod \"barbican-dcca-account-create-update-w2cd7\" (UID: \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\") " pod="openstack/barbican-dcca-account-create-update-w2cd7" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.186113 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jbrrj" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.201501 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dcca-account-create-update-w2cd7" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.208297 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-s2r4g"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.209314 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.220532 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.220712 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.220812 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6k6c6" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.220904 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.226064 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-operator-scripts\") pod \"cinder-9019-account-create-update-znhxs\" (UID: \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\") " 
pod="openstack/cinder-9019-account-create-update-znhxs" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.226122 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jsqn\" (UniqueName: \"kubernetes.io/projected/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-kube-api-access-8jsqn\") pod \"cinder-9019-account-create-update-znhxs\" (UID: \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\") " pod="openstack/cinder-9019-account-create-update-znhxs" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.227108 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-operator-scripts\") pod \"cinder-9019-account-create-update-znhxs\" (UID: \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\") " pod="openstack/cinder-9019-account-create-update-znhxs" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.265283 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ng955"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.277707 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ng955" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.283588 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jsqn\" (UniqueName: \"kubernetes.io/projected/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-kube-api-access-8jsqn\") pod \"cinder-9019-account-create-update-znhxs\" (UID: \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\") " pod="openstack/cinder-9019-account-create-update-znhxs" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.298780 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9019-account-create-update-znhxs" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.306002 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-s2r4g"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.335411 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-config-data\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.336470 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-combined-ca-bundle\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.336537 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkzv\" (UniqueName: \"kubernetes.io/projected/12ce0662-b1aa-405b-a577-ebfd14385735-kube-api-access-9vkzv\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.364764 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ng955"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.406746 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ab0c-account-create-update-cwnxk"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.408228 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ab0c-account-create-update-cwnxk" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.411636 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.424737 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ab0c-account-create-update-cwnxk"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.438734 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-config-data\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.438798 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-combined-ca-bundle\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.438865 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vkzv\" (UniqueName: \"kubernetes.io/projected/12ce0662-b1aa-405b-a577-ebfd14385735-kube-api-access-9vkzv\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.439298 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzppc\" (UniqueName: \"kubernetes.io/projected/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-kube-api-access-rzppc\") pod \"neutron-db-create-ng955\" (UID: \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\") " pod="openstack/neutron-db-create-ng955" Dec 02 10:32:03 crc 
kubenswrapper[4711]: I1202 10:32:03.439413 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-operator-scripts\") pod \"neutron-db-create-ng955\" (UID: \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\") " pod="openstack/neutron-db-create-ng955" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.443977 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-combined-ca-bundle\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.451022 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-config-data\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.463686 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vkzv\" (UniqueName: \"kubernetes.io/projected/12ce0662-b1aa-405b-a577-ebfd14385735-kube-api-access-9vkzv\") pod \"keystone-db-sync-s2r4g\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.542678 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4c5t\" (UniqueName: \"kubernetes.io/projected/3fe97d90-558c-4c53-bfe6-21b93c167ede-kube-api-access-q4c5t\") pod \"neutron-ab0c-account-create-update-cwnxk\" (UID: \"3fe97d90-558c-4c53-bfe6-21b93c167ede\") " pod="openstack/neutron-ab0c-account-create-update-cwnxk" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 
10:32:03.542743 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe97d90-558c-4c53-bfe6-21b93c167ede-operator-scripts\") pod \"neutron-ab0c-account-create-update-cwnxk\" (UID: \"3fe97d90-558c-4c53-bfe6-21b93c167ede\") " pod="openstack/neutron-ab0c-account-create-update-cwnxk" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.542809 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzppc\" (UniqueName: \"kubernetes.io/projected/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-kube-api-access-rzppc\") pod \"neutron-db-create-ng955\" (UID: \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\") " pod="openstack/neutron-db-create-ng955" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.542856 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-operator-scripts\") pod \"neutron-db-create-ng955\" (UID: \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\") " pod="openstack/neutron-db-create-ng955" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.543454 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-operator-scripts\") pod \"neutron-db-create-ng955\" (UID: \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\") " pod="openstack/neutron-db-create-ng955" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.562318 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzppc\" (UniqueName: \"kubernetes.io/projected/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-kube-api-access-rzppc\") pod \"neutron-db-create-ng955\" (UID: \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\") " pod="openstack/neutron-db-create-ng955" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.646442 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4c5t\" (UniqueName: \"kubernetes.io/projected/3fe97d90-558c-4c53-bfe6-21b93c167ede-kube-api-access-q4c5t\") pod \"neutron-ab0c-account-create-update-cwnxk\" (UID: \"3fe97d90-558c-4c53-bfe6-21b93c167ede\") " pod="openstack/neutron-ab0c-account-create-update-cwnxk" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.646750 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe97d90-558c-4c53-bfe6-21b93c167ede-operator-scripts\") pod \"neutron-ab0c-account-create-update-cwnxk\" (UID: \"3fe97d90-558c-4c53-bfe6-21b93c167ede\") " pod="openstack/neutron-ab0c-account-create-update-cwnxk" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.647433 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe97d90-558c-4c53-bfe6-21b93c167ede-operator-scripts\") pod \"neutron-ab0c-account-create-update-cwnxk\" (UID: \"3fe97d90-558c-4c53-bfe6-21b93c167ede\") " pod="openstack/neutron-ab0c-account-create-update-cwnxk" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.670615 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4c5t\" (UniqueName: \"kubernetes.io/projected/3fe97d90-558c-4c53-bfe6-21b93c167ede-kube-api-access-q4c5t\") pod \"neutron-ab0c-account-create-update-cwnxk\" (UID: \"3fe97d90-558c-4c53-bfe6-21b93c167ede\") " pod="openstack/neutron-ab0c-account-create-update-cwnxk" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.719547 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.731240 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ng955" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.762171 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ab0c-account-create-update-cwnxk" Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.823304 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vc295"] Dec 02 10:32:03 crc kubenswrapper[4711]: W1202 10:32:03.825876 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod366b0e56_2601_4ae2_90be_958339d5bde1.slice/crio-64f0ce125214a63081086ad0e63fc689485090cf2f2951f430e64c0ee505f28c WatchSource:0}: Error finding container 64f0ce125214a63081086ad0e63fc689485090cf2f2951f430e64c0ee505f28c: Status 404 returned error can't find the container with id 64f0ce125214a63081086ad0e63fc689485090cf2f2951f430e64c0ee505f28c Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.857576 4711 generic.go:334] "Generic (PLEG): container finished" podID="7f116971-a4cd-4f9c-b722-f5b00f630956" containerID="e632f4ed340d82d415adfab0ed398b9b08cb10d0068e010bc2cdc241638edf3d" exitCode=0 Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.857986 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb-config-pskps" event={"ID":"7f116971-a4cd-4f9c-b722-f5b00f630956","Type":"ContainerDied","Data":"e632f4ed340d82d415adfab0ed398b9b08cb10d0068e010bc2cdc241638edf3d"} Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.864074 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vc295" event={"ID":"366b0e56-2601-4ae2-90be-958339d5bde1","Type":"ContainerStarted","Data":"64f0ce125214a63081086ad0e63fc689485090cf2f2951f430e64c0ee505f28c"} Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.932139 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-create-jbrrj"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.941958 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9019-account-create-update-znhxs"] Dec 02 10:32:03 crc kubenswrapper[4711]: I1202 10:32:03.949217 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-dcca-account-create-update-w2cd7"] Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.270849 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-s2r4g"] Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.338932 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ng955"] Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.427099 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ab0c-account-create-update-cwnxk"] Dec 02 10:32:04 crc kubenswrapper[4711]: W1202 10:32:04.647284 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffdcca1e_8be0_4069_888b_08a26ffaf8b0.slice/crio-923583ea2cfed161f1139f511f73c22377a4b7b75386796c0655a75b0e84e87a WatchSource:0}: Error finding container 923583ea2cfed161f1139f511f73c22377a4b7b75386796c0655a75b0e84e87a: Status 404 returned error can't find the container with id 923583ea2cfed161f1139f511f73c22377a4b7b75386796c0655a75b0e84e87a Dec 02 10:32:04 crc kubenswrapper[4711]: W1202 10:32:04.650153 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fe97d90_558c_4c53_bfe6_21b93c167ede.slice/crio-29e51a46ccf642dc1b158ee8aecb706d65a9da503e5aa8479794b48cc2c3e781 WatchSource:0}: Error finding container 29e51a46ccf642dc1b158ee8aecb706d65a9da503e5aa8479794b48cc2c3e781: Status 404 returned error can't find the container with id 29e51a46ccf642dc1b158ee8aecb706d65a9da503e5aa8479794b48cc2c3e781 Dec 02 10:32:04 crc 
kubenswrapper[4711]: I1202 10:32:04.873246 4711 generic.go:334] "Generic (PLEG): container finished" podID="91b4e096-f633-4842-a5e1-9cc10c99ff50" containerID="26b2b6bfb2c1e4df74b5657ee53508af2a187aea10cc17c99baf789fabdea733" exitCode=0 Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.873349 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jbrrj" event={"ID":"91b4e096-f633-4842-a5e1-9cc10c99ff50","Type":"ContainerDied","Data":"26b2b6bfb2c1e4df74b5657ee53508af2a187aea10cc17c99baf789fabdea733"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.873379 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jbrrj" event={"ID":"91b4e096-f633-4842-a5e1-9cc10c99ff50","Type":"ContainerStarted","Data":"56bcf2068e59f43cb10343d984a1f0778dfd9119fe24870d174121aa244bd06f"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.874614 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ab0c-account-create-update-cwnxk" event={"ID":"3fe97d90-558c-4c53-bfe6-21b93c167ede","Type":"ContainerStarted","Data":"29e51a46ccf642dc1b158ee8aecb706d65a9da503e5aa8479794b48cc2c3e781"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.876003 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ng955" event={"ID":"ffdcca1e-8be0-4069-888b-08a26ffaf8b0","Type":"ContainerStarted","Data":"923583ea2cfed161f1139f511f73c22377a4b7b75386796c0655a75b0e84e87a"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.878362 4711 generic.go:334] "Generic (PLEG): container finished" podID="366b0e56-2601-4ae2-90be-958339d5bde1" containerID="47fa764c7333bee6bc1964e356ee61dbfabb3e9825a022b42293542c1003735d" exitCode=0 Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.878439 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vc295" 
event={"ID":"366b0e56-2601-4ae2-90be-958339d5bde1","Type":"ContainerDied","Data":"47fa764c7333bee6bc1964e356ee61dbfabb3e9825a022b42293542c1003735d"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.880049 4711 generic.go:334] "Generic (PLEG): container finished" podID="ce5ba869-bfa6-40fa-b81b-7b7f3490e36e" containerID="869fbab40f7610a9667a0116670049f9ec34d24319fd225cc0532b1b6e4d438b" exitCode=0 Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.880462 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dcca-account-create-update-w2cd7" event={"ID":"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e","Type":"ContainerDied","Data":"869fbab40f7610a9667a0116670049f9ec34d24319fd225cc0532b1b6e4d438b"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.880494 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dcca-account-create-update-w2cd7" event={"ID":"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e","Type":"ContainerStarted","Data":"0762407751d96a09003ccc8dc69aaacdc1f24e8b17e782ad3130b205d66be06c"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.882303 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s2r4g" event={"ID":"12ce0662-b1aa-405b-a577-ebfd14385735","Type":"ContainerStarted","Data":"d83b9e82bbf04d643d3263fff4d4b2e34900c818794e9d4149c09236143f7f19"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.884353 4711 generic.go:334] "Generic (PLEG): container finished" podID="a961b46e-6c27-4361-8dbd-7cc28d6b2a32" containerID="a89f89fb79ccf8741e95403278c76345fff6898f9174f73c2f7ed734d89b333f" exitCode=0 Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 10:32:04.884485 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9019-account-create-update-znhxs" event={"ID":"a961b46e-6c27-4361-8dbd-7cc28d6b2a32","Type":"ContainerDied","Data":"a89f89fb79ccf8741e95403278c76345fff6898f9174f73c2f7ed734d89b333f"} Dec 02 10:32:04 crc kubenswrapper[4711]: I1202 
10:32:04.884530 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9019-account-create-update-znhxs" event={"ID":"a961b46e-6c27-4361-8dbd-7cc28d6b2a32","Type":"ContainerStarted","Data":"942f4677fb0d0f7fa818a33e64b24942f1dac9532bda5fccb5e599e0b2e2eb9c"} Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.272505 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.371233 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-log-ovn\") pod \"7f116971-a4cd-4f9c-b722-f5b00f630956\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.371364 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7f116971-a4cd-4f9c-b722-f5b00f630956" (UID: "7f116971-a4cd-4f9c-b722-f5b00f630956"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.371575 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-additional-scripts\") pod \"7f116971-a4cd-4f9c-b722-f5b00f630956\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.371616 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run-ovn\") pod \"7f116971-a4cd-4f9c-b722-f5b00f630956\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.371635 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-scripts\") pod \"7f116971-a4cd-4f9c-b722-f5b00f630956\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.371704 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7f116971-a4cd-4f9c-b722-f5b00f630956" (UID: "7f116971-a4cd-4f9c-b722-f5b00f630956"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.371820 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run" (OuterVolumeSpecName: "var-run") pod "7f116971-a4cd-4f9c-b722-f5b00f630956" (UID: "7f116971-a4cd-4f9c-b722-f5b00f630956"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.372184 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7f116971-a4cd-4f9c-b722-f5b00f630956" (UID: "7f116971-a4cd-4f9c-b722-f5b00f630956"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.372546 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-scripts" (OuterVolumeSpecName: "scripts") pod "7f116971-a4cd-4f9c-b722-f5b00f630956" (UID: "7f116971-a4cd-4f9c-b722-f5b00f630956"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.372583 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run\") pod \"7f116971-a4cd-4f9c-b722-f5b00f630956\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.372645 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgmz7\" (UniqueName: \"kubernetes.io/projected/7f116971-a4cd-4f9c-b722-f5b00f630956-kube-api-access-pgmz7\") pod \"7f116971-a4cd-4f9c-b722-f5b00f630956\" (UID: \"7f116971-a4cd-4f9c-b722-f5b00f630956\") " Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.373499 4711 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.373518 4711 
reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.373526 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f116971-a4cd-4f9c-b722-f5b00f630956-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.373536 4711 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.373544 4711 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7f116971-a4cd-4f9c-b722-f5b00f630956-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.408502 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f116971-a4cd-4f9c-b722-f5b00f630956-kube-api-access-pgmz7" (OuterVolumeSpecName: "kube-api-access-pgmz7") pod "7f116971-a4cd-4f9c-b722-f5b00f630956" (UID: "7f116971-a4cd-4f9c-b722-f5b00f630956"). InnerVolumeSpecName "kube-api-access-pgmz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.456025 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q57lb-config-pskps"] Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.464375 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q57lb-config-pskps"] Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.475250 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgmz7\" (UniqueName: \"kubernetes.io/projected/7f116971-a4cd-4f9c-b722-f5b00f630956-kube-api-access-pgmz7\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.516589 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q57lb-config-c7hhx"] Dec 02 10:32:05 crc kubenswrapper[4711]: E1202 10:32:05.517003 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f116971-a4cd-4f9c-b722-f5b00f630956" containerName="ovn-config" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.517027 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f116971-a4cd-4f9c-b722-f5b00f630956" containerName="ovn-config" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.517232 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f116971-a4cd-4f9c-b722-f5b00f630956" containerName="ovn-config" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.517753 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.529996 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q57lb-config-c7hhx"] Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.678317 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-scripts\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.678372 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run-ovn\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.678403 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.678423 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wkn9\" (UniqueName: \"kubernetes.io/projected/408e02d3-8104-465b-b1ee-0c6f2812df83-kube-api-access-2wkn9\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.678451 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-additional-scripts\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.678480 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-log-ovn\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.779645 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-log-ovn\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.779757 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-scripts\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.779787 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run-ovn\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.779812 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.779828 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wkn9\" (UniqueName: \"kubernetes.io/projected/408e02d3-8104-465b-b1ee-0c6f2812df83-kube-api-access-2wkn9\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.779860 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-additional-scripts\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.779987 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-log-ovn\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.780122 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.780236 4711 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run-ovn\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.780610 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-additional-scripts\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.782490 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-scripts\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.798192 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wkn9\" (UniqueName: \"kubernetes.io/projected/408e02d3-8104-465b-b1ee-0c6f2812df83-kube-api-access-2wkn9\") pod \"ovn-controller-q57lb-config-c7hhx\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") " pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.832628 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q57lb-config-c7hhx" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.893810 4711 generic.go:334] "Generic (PLEG): container finished" podID="ffdcca1e-8be0-4069-888b-08a26ffaf8b0" containerID="67d2e19949338fb061366b68700e136150a08318a59df57aec79e7744264e0ec" exitCode=0 Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.893944 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ng955" event={"ID":"ffdcca1e-8be0-4069-888b-08a26ffaf8b0","Type":"ContainerDied","Data":"67d2e19949338fb061366b68700e136150a08318a59df57aec79e7744264e0ec"} Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.896098 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5be12818cb41241f1ba57100dc3ecb129efafbf24ccf5655420b67a86246c9c7" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.896110 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q57lb-config-pskps" Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.901191 4711 generic.go:334] "Generic (PLEG): container finished" podID="3fe97d90-558c-4c53-bfe6-21b93c167ede" containerID="0184e7874cd5edf92cd541ddd24dd35c294a13b384a4735361fd871b1cf17a95" exitCode=0 Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.901264 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ab0c-account-create-update-cwnxk" event={"ID":"3fe97d90-558c-4c53-bfe6-21b93c167ede","Type":"ContainerDied","Data":"0184e7874cd5edf92cd541ddd24dd35c294a13b384a4735361fd871b1cf17a95"} Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.905212 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"badf4b1001f675f1ac303ac1656fe87d05375e84e8663a0efe59a4a093f6061d"} Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.905248 
4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"268e25f851ba8bcf6a197c02ba84b04bbbac92743882e69f38390541eb0be3ec"}
Dec 02 10:32:05 crc kubenswrapper[4711]: I1202 10:32:05.905261 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"4848d4597c12bf24ae66b5396be255c2ace7bfe0dedf8930fe0ba5e373e7490d"}
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.327196 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dcca-account-create-update-w2cd7"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.332319 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vc295"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.338240 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q57lb-config-c7hhx"]
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.420216 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9019-account-create-update-znhxs"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.423515 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jbrrj"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.497543 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366b0e56-2601-4ae2-90be-958339d5bde1-operator-scripts\") pod \"366b0e56-2601-4ae2-90be-958339d5bde1\" (UID: \"366b0e56-2601-4ae2-90be-958339d5bde1\") "
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.497607 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-operator-scripts\") pod \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\" (UID: \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\") "
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.497683 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjm9k\" (UniqueName: \"kubernetes.io/projected/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-kube-api-access-jjm9k\") pod \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\" (UID: \"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e\") "
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.497800 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j79rc\" (UniqueName: \"kubernetes.io/projected/366b0e56-2601-4ae2-90be-958339d5bde1-kube-api-access-j79rc\") pod \"366b0e56-2601-4ae2-90be-958339d5bde1\" (UID: \"366b0e56-2601-4ae2-90be-958339d5bde1\") "
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.498784 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/366b0e56-2601-4ae2-90be-958339d5bde1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "366b0e56-2601-4ae2-90be-958339d5bde1" (UID: "366b0e56-2601-4ae2-90be-958339d5bde1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.498777 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce5ba869-bfa6-40fa-b81b-7b7f3490e36e" (UID: "ce5ba869-bfa6-40fa-b81b-7b7f3490e36e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.503804 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-kube-api-access-jjm9k" (OuterVolumeSpecName: "kube-api-access-jjm9k") pod "ce5ba869-bfa6-40fa-b81b-7b7f3490e36e" (UID: "ce5ba869-bfa6-40fa-b81b-7b7f3490e36e"). InnerVolumeSpecName "kube-api-access-jjm9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.505521 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366b0e56-2601-4ae2-90be-958339d5bde1-kube-api-access-j79rc" (OuterVolumeSpecName: "kube-api-access-j79rc") pod "366b0e56-2601-4ae2-90be-958339d5bde1" (UID: "366b0e56-2601-4ae2-90be-958339d5bde1"). InnerVolumeSpecName "kube-api-access-j79rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.600771 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jsqn\" (UniqueName: \"kubernetes.io/projected/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-kube-api-access-8jsqn\") pod \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\" (UID: \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\") "
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.600839 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5t2c\" (UniqueName: \"kubernetes.io/projected/91b4e096-f633-4842-a5e1-9cc10c99ff50-kube-api-access-v5t2c\") pod \"91b4e096-f633-4842-a5e1-9cc10c99ff50\" (UID: \"91b4e096-f633-4842-a5e1-9cc10c99ff50\") "
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.600920 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b4e096-f633-4842-a5e1-9cc10c99ff50-operator-scripts\") pod \"91b4e096-f633-4842-a5e1-9cc10c99ff50\" (UID: \"91b4e096-f633-4842-a5e1-9cc10c99ff50\") "
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.600994 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-operator-scripts\") pod \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\" (UID: \"a961b46e-6c27-4361-8dbd-7cc28d6b2a32\") "
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.601430 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366b0e56-2601-4ae2-90be-958339d5bde1-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.601454 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.601471 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjm9k\" (UniqueName: \"kubernetes.io/projected/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e-kube-api-access-jjm9k\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.601486 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j79rc\" (UniqueName: \"kubernetes.io/projected/366b0e56-2601-4ae2-90be-958339d5bde1-kube-api-access-j79rc\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.601873 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4e096-f633-4842-a5e1-9cc10c99ff50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91b4e096-f633-4842-a5e1-9cc10c99ff50" (UID: "91b4e096-f633-4842-a5e1-9cc10c99ff50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.602017 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a961b46e-6c27-4361-8dbd-7cc28d6b2a32" (UID: "a961b46e-6c27-4361-8dbd-7cc28d6b2a32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.604706 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b4e096-f633-4842-a5e1-9cc10c99ff50-kube-api-access-v5t2c" (OuterVolumeSpecName: "kube-api-access-v5t2c") pod "91b4e096-f633-4842-a5e1-9cc10c99ff50" (UID: "91b4e096-f633-4842-a5e1-9cc10c99ff50"). InnerVolumeSpecName "kube-api-access-v5t2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.606613 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-kube-api-access-8jsqn" (OuterVolumeSpecName: "kube-api-access-8jsqn") pod "a961b46e-6c27-4361-8dbd-7cc28d6b2a32" (UID: "a961b46e-6c27-4361-8dbd-7cc28d6b2a32"). InnerVolumeSpecName "kube-api-access-8jsqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.704069 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.704138 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jsqn\" (UniqueName: \"kubernetes.io/projected/a961b46e-6c27-4361-8dbd-7cc28d6b2a32-kube-api-access-8jsqn\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.704163 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5t2c\" (UniqueName: \"kubernetes.io/projected/91b4e096-f633-4842-a5e1-9cc10c99ff50-kube-api-access-v5t2c\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.704176 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b4e096-f633-4842-a5e1-9cc10c99ff50-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.914959 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vc295" event={"ID":"366b0e56-2601-4ae2-90be-958339d5bde1","Type":"ContainerDied","Data":"64f0ce125214a63081086ad0e63fc689485090cf2f2951f430e64c0ee505f28c"}
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.915492 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64f0ce125214a63081086ad0e63fc689485090cf2f2951f430e64c0ee505f28c"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.915088 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vc295"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.917522 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dcca-account-create-update-w2cd7" event={"ID":"ce5ba869-bfa6-40fa-b81b-7b7f3490e36e","Type":"ContainerDied","Data":"0762407751d96a09003ccc8dc69aaacdc1f24e8b17e782ad3130b205d66be06c"}
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.917580 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0762407751d96a09003ccc8dc69aaacdc1f24e8b17e782ad3130b205d66be06c"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.917579 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dcca-account-create-update-w2cd7"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.919361 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9019-account-create-update-znhxs" event={"ID":"a961b46e-6c27-4361-8dbd-7cc28d6b2a32","Type":"ContainerDied","Data":"942f4677fb0d0f7fa818a33e64b24942f1dac9532bda5fccb5e599e0b2e2eb9c"}
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.919402 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942f4677fb0d0f7fa818a33e64b24942f1dac9532bda5fccb5e599e0b2e2eb9c"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.919368 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9019-account-create-update-znhxs"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.921010 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jbrrj" event={"ID":"91b4e096-f633-4842-a5e1-9cc10c99ff50","Type":"ContainerDied","Data":"56bcf2068e59f43cb10343d984a1f0778dfd9119fe24870d174121aa244bd06f"}
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.921059 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56bcf2068e59f43cb10343d984a1f0778dfd9119fe24870d174121aa244bd06f"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.921039 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jbrrj"
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.924234 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"8bad97c532ce5ad4619585a78f9769a62341d5e2a6548586da137db83c06731d"}
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.926013 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb-config-c7hhx" event={"ID":"408e02d3-8104-465b-b1ee-0c6f2812df83","Type":"ContainerStarted","Data":"f25da8488085ba3ab744b7fb07dfac8c755f3b19c1b1982493783b6a8fc5f856"}
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.926084 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb-config-c7hhx" event={"ID":"408e02d3-8104-465b-b1ee-0c6f2812df83","Type":"ContainerStarted","Data":"ee4301523fac790ae5d172599d839a08d67f7548225ac28c63de5d56c6a52953"}
Dec 02 10:32:06 crc kubenswrapper[4711]: I1202 10:32:06.956403 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q57lb-config-c7hhx" podStartSLOduration=1.956380608 podStartE2EDuration="1.956380608s" podCreationTimestamp="2025-12-02 10:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:06.952357458 +0000 UTC m=+1116.661723935" watchObservedRunningTime="2025-12-02 10:32:06.956380608 +0000 UTC m=+1116.665747045"
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.088078 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f116971-a4cd-4f9c-b722-f5b00f630956" path="/var/lib/kubelet/pods/7f116971-a4cd-4f9c-b722-f5b00f630956/volumes"
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.299897 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ng955"
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.305632 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ab0c-account-create-update-cwnxk"
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.432906 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-operator-scripts\") pod \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\" (UID: \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\") "
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.433596 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffdcca1e-8be0-4069-888b-08a26ffaf8b0" (UID: "ffdcca1e-8be0-4069-888b-08a26ffaf8b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.433666 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzppc\" (UniqueName: \"kubernetes.io/projected/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-kube-api-access-rzppc\") pod \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\" (UID: \"ffdcca1e-8be0-4069-888b-08a26ffaf8b0\") "
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.434436 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4c5t\" (UniqueName: \"kubernetes.io/projected/3fe97d90-558c-4c53-bfe6-21b93c167ede-kube-api-access-q4c5t\") pod \"3fe97d90-558c-4c53-bfe6-21b93c167ede\" (UID: \"3fe97d90-558c-4c53-bfe6-21b93c167ede\") "
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.434763 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe97d90-558c-4c53-bfe6-21b93c167ede-operator-scripts\") pod \"3fe97d90-558c-4c53-bfe6-21b93c167ede\" (UID: \"3fe97d90-558c-4c53-bfe6-21b93c167ede\") "
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.435044 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fe97d90-558c-4c53-bfe6-21b93c167ede-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fe97d90-558c-4c53-bfe6-21b93c167ede" (UID: "3fe97d90-558c-4c53-bfe6-21b93c167ede"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.436494 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fe97d90-558c-4c53-bfe6-21b93c167ede-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.436518 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.439328 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-kube-api-access-rzppc" (OuterVolumeSpecName: "kube-api-access-rzppc") pod "ffdcca1e-8be0-4069-888b-08a26ffaf8b0" (UID: "ffdcca1e-8be0-4069-888b-08a26ffaf8b0"). InnerVolumeSpecName "kube-api-access-rzppc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.439701 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe97d90-558c-4c53-bfe6-21b93c167ede-kube-api-access-q4c5t" (OuterVolumeSpecName: "kube-api-access-q4c5t") pod "3fe97d90-558c-4c53-bfe6-21b93c167ede" (UID: "3fe97d90-558c-4c53-bfe6-21b93c167ede"). InnerVolumeSpecName "kube-api-access-q4c5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.538318 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzppc\" (UniqueName: \"kubernetes.io/projected/ffdcca1e-8be0-4069-888b-08a26ffaf8b0-kube-api-access-rzppc\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.538350 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4c5t\" (UniqueName: \"kubernetes.io/projected/3fe97d90-558c-4c53-bfe6-21b93c167ede-kube-api-access-q4c5t\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.564397 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-q57lb"
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.937551 4711 generic.go:334] "Generic (PLEG): container finished" podID="408e02d3-8104-465b-b1ee-0c6f2812df83" containerID="f25da8488085ba3ab744b7fb07dfac8c755f3b19c1b1982493783b6a8fc5f856" exitCode=0
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.937687 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb-config-c7hhx" event={"ID":"408e02d3-8104-465b-b1ee-0c6f2812df83","Type":"ContainerDied","Data":"f25da8488085ba3ab744b7fb07dfac8c755f3b19c1b1982493783b6a8fc5f856"}
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.945248 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ab0c-account-create-update-cwnxk" event={"ID":"3fe97d90-558c-4c53-bfe6-21b93c167ede","Type":"ContainerDied","Data":"29e51a46ccf642dc1b158ee8aecb706d65a9da503e5aa8479794b48cc2c3e781"}
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.945296 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e51a46ccf642dc1b158ee8aecb706d65a9da503e5aa8479794b48cc2c3e781"
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.945326 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ab0c-account-create-update-cwnxk"
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.946885 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ng955" event={"ID":"ffdcca1e-8be0-4069-888b-08a26ffaf8b0","Type":"ContainerDied","Data":"923583ea2cfed161f1139f511f73c22377a4b7b75386796c0655a75b0e84e87a"}
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.946912 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="923583ea2cfed161f1139f511f73c22377a4b7b75386796c0655a75b0e84e87a"
Dec 02 10:32:07 crc kubenswrapper[4711]: I1202 10:32:07.946982 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ng955"
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.452434 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q57lb-config-c7hhx"
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.605656 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run\") pod \"408e02d3-8104-465b-b1ee-0c6f2812df83\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") "
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.605775 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-additional-scripts\") pod \"408e02d3-8104-465b-b1ee-0c6f2812df83\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") "
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.605820 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run-ovn\") pod \"408e02d3-8104-465b-b1ee-0c6f2812df83\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") "
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.605907 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-log-ovn\") pod \"408e02d3-8104-465b-b1ee-0c6f2812df83\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") "
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.606014 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wkn9\" (UniqueName: \"kubernetes.io/projected/408e02d3-8104-465b-b1ee-0c6f2812df83-kube-api-access-2wkn9\") pod \"408e02d3-8104-465b-b1ee-0c6f2812df83\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") "
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.606050 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-scripts\") pod \"408e02d3-8104-465b-b1ee-0c6f2812df83\" (UID: \"408e02d3-8104-465b-b1ee-0c6f2812df83\") "
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.606455 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "408e02d3-8104-465b-b1ee-0c6f2812df83" (UID: "408e02d3-8104-465b-b1ee-0c6f2812df83"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.606470 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "408e02d3-8104-465b-b1ee-0c6f2812df83" (UID: "408e02d3-8104-465b-b1ee-0c6f2812df83"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.606489 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run" (OuterVolumeSpecName: "var-run") pod "408e02d3-8104-465b-b1ee-0c6f2812df83" (UID: "408e02d3-8104-465b-b1ee-0c6f2812df83"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.606872 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "408e02d3-8104-465b-b1ee-0c6f2812df83" (UID: "408e02d3-8104-465b-b1ee-0c6f2812df83"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.608006 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-scripts" (OuterVolumeSpecName: "scripts") pod "408e02d3-8104-465b-b1ee-0c6f2812df83" (UID: "408e02d3-8104-465b-b1ee-0c6f2812df83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.612602 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408e02d3-8104-465b-b1ee-0c6f2812df83-kube-api-access-2wkn9" (OuterVolumeSpecName: "kube-api-access-2wkn9") pod "408e02d3-8104-465b-b1ee-0c6f2812df83" (UID: "408e02d3-8104-465b-b1ee-0c6f2812df83"). InnerVolumeSpecName "kube-api-access-2wkn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.707780 4711 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-log-ovn\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.707817 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wkn9\" (UniqueName: \"kubernetes.io/projected/408e02d3-8104-465b-b1ee-0c6f2812df83-kube-api-access-2wkn9\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.707828 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.707840 4711 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.707849 4711 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/408e02d3-8104-465b-b1ee-0c6f2812df83-additional-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.707858 4711 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/408e02d3-8104-465b-b1ee-0c6f2812df83-var-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:11 crc kubenswrapper[4711]: I1202 10:32:11.996429 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s2r4g" event={"ID":"12ce0662-b1aa-405b-a577-ebfd14385735","Type":"ContainerStarted","Data":"24d1a5b493a52689447976f3350264a9cd16de866ad7a1cbd07b6cf9f89d4626"}
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.006776 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"1159991d2b90b88144c3039b9c8df5690f39ce348c4ca4c7379a14a8dfd4dc1d"}
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.006819 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"b74ae4e46e4103f4f7cb13aa1869375ae94af24bc4fea2b09c203c461209c757"}
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.006829 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"1fc7a9dc3a8c4c22d7a240abef6b8eadebca3876602a8019ce3c55e6d35bfb95"}
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.008708 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q57lb-config-c7hhx" event={"ID":"408e02d3-8104-465b-b1ee-0c6f2812df83","Type":"ContainerDied","Data":"ee4301523fac790ae5d172599d839a08d67f7548225ac28c63de5d56c6a52953"}
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.008732 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee4301523fac790ae5d172599d839a08d67f7548225ac28c63de5d56c6a52953"
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.008796 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q57lb-config-c7hhx"
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.027163 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-s2r4g" podStartSLOduration=2.266930995 podStartE2EDuration="9.027133685s" podCreationTimestamp="2025-12-02 10:32:03 +0000 UTC" firstStartedPulling="2025-12-02 10:32:04.679218752 +0000 UTC m=+1114.388585199" lastFinishedPulling="2025-12-02 10:32:11.439421402 +0000 UTC m=+1121.148787889" observedRunningTime="2025-12-02 10:32:12.019909279 +0000 UTC m=+1121.729275736" watchObservedRunningTime="2025-12-02 10:32:12.027133685 +0000 UTC m=+1121.736500142"
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.577858 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q57lb-config-c7hhx"]
Dec 02 10:32:12 crc kubenswrapper[4711]: I1202 10:32:12.606567 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q57lb-config-c7hhx"]
Dec 02 10:32:13 crc kubenswrapper[4711]: I1202 10:32:13.031614 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"fa70960c044746571fe183d8e758be96cfd8da8f7a8405ac9a98001e2bfd6dd5"}
Dec 02 10:32:13 crc kubenswrapper[4711]: I1202 10:32:13.093855 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="408e02d3-8104-465b-b1ee-0c6f2812df83" path="/var/lib/kubelet/pods/408e02d3-8104-465b-b1ee-0c6f2812df83/volumes"
Dec 02 10:32:14 crc kubenswrapper[4711]: I1202 10:32:14.042995 4711 generic.go:334] "Generic (PLEG): container finished" podID="2b94dbaa-33c0-42b0-b71a-9af5fda1a876" containerID="cb3977ccc770ae471abbd9e993cb93fda494ef6e15702b11add2b16bec094fed" exitCode=0
Dec 02 10:32:14 crc kubenswrapper[4711]: I1202 10:32:14.043052 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-svtvp" event={"ID":"2b94dbaa-33c0-42b0-b71a-9af5fda1a876","Type":"ContainerDied","Data":"cb3977ccc770ae471abbd9e993cb93fda494ef6e15702b11add2b16bec094fed"}
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.602165 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-svtvp"
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.794512 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-combined-ca-bundle\") pod \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") "
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.795208 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz7wb\" (UniqueName: \"kubernetes.io/projected/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-kube-api-access-wz7wb\") pod \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") "
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.795310 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-db-sync-config-data\") pod \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") "
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.795425 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-config-data\") pod \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\" (UID: \"2b94dbaa-33c0-42b0-b71a-9af5fda1a876\") "
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.803728 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-kube-api-access-wz7wb" (OuterVolumeSpecName: "kube-api-access-wz7wb") pod "2b94dbaa-33c0-42b0-b71a-9af5fda1a876" (UID: "2b94dbaa-33c0-42b0-b71a-9af5fda1a876"). InnerVolumeSpecName "kube-api-access-wz7wb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.813713 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2b94dbaa-33c0-42b0-b71a-9af5fda1a876" (UID: "2b94dbaa-33c0-42b0-b71a-9af5fda1a876"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.843235 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b94dbaa-33c0-42b0-b71a-9af5fda1a876" (UID: "2b94dbaa-33c0-42b0-b71a-9af5fda1a876"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.886346 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-config-data" (OuterVolumeSpecName: "config-data") pod "2b94dbaa-33c0-42b0-b71a-9af5fda1a876" (UID: "2b94dbaa-33c0-42b0-b71a-9af5fda1a876"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.899413 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.899476 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz7wb\" (UniqueName: \"kubernetes.io/projected/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-kube-api-access-wz7wb\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.899498 4711 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:15 crc kubenswrapper[4711]: I1202 10:32:15.899519 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b94dbaa-33c0-42b0-b71a-9af5fda1a876-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.066696 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-svtvp" event={"ID":"2b94dbaa-33c0-42b0-b71a-9af5fda1a876","Type":"ContainerDied","Data":"35570640b8337029e1cfdf46524f8287594d6d5056869d864413979ceb952596"}
Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.066754 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35570640b8337029e1cfdf46524f8287594d6d5056869d864413979ceb952596"
Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.066767 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-svtvp"
Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.595742 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-9gp2w"]
Dec 02 10:32:16 crc kubenswrapper[4711]: E1202 10:32:16.596234 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b4e096-f633-4842-a5e1-9cc10c99ff50" containerName="mariadb-database-create"
Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596250 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b4e096-f633-4842-a5e1-9cc10c99ff50" containerName="mariadb-database-create"
Dec 02 10:32:16 crc kubenswrapper[4711]: E1202 10:32:16.596271 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5ba869-bfa6-40fa-b81b-7b7f3490e36e" containerName="mariadb-account-create-update"
Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596281 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5ba869-bfa6-40fa-b81b-7b7f3490e36e" containerName="mariadb-account-create-update"
Dec 02 10:32:16 crc kubenswrapper[4711]: E1202 10:32:16.596311 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366b0e56-2601-4ae2-90be-958339d5bde1" containerName="mariadb-database-create"
Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596319 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="366b0e56-2601-4ae2-90be-958339d5bde1" containerName="mariadb-database-create"
Dec 02 10:32:16 crc kubenswrapper[4711]: E1202 10:32:16.596334 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a961b46e-6c27-4361-8dbd-7cc28d6b2a32" containerName="mariadb-account-create-update"
Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596342 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="a961b46e-6c27-4361-8dbd-7cc28d6b2a32" containerName="mariadb-account-create-update"
Dec 02 10:32:16 crc kubenswrapper[4711]: E1202 10:32:16.596351 4711 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="2b94dbaa-33c0-42b0-b71a-9af5fda1a876" containerName="glance-db-sync" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596358 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b94dbaa-33c0-42b0-b71a-9af5fda1a876" containerName="glance-db-sync" Dec 02 10:32:16 crc kubenswrapper[4711]: E1202 10:32:16.596375 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408e02d3-8104-465b-b1ee-0c6f2812df83" containerName="ovn-config" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596382 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="408e02d3-8104-465b-b1ee-0c6f2812df83" containerName="ovn-config" Dec 02 10:32:16 crc kubenswrapper[4711]: E1202 10:32:16.596402 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffdcca1e-8be0-4069-888b-08a26ffaf8b0" containerName="mariadb-database-create" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596410 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffdcca1e-8be0-4069-888b-08a26ffaf8b0" containerName="mariadb-database-create" Dec 02 10:32:16 crc kubenswrapper[4711]: E1202 10:32:16.596423 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe97d90-558c-4c53-bfe6-21b93c167ede" containerName="mariadb-account-create-update" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596430 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe97d90-558c-4c53-bfe6-21b93c167ede" containerName="mariadb-account-create-update" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596617 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe97d90-558c-4c53-bfe6-21b93c167ede" containerName="mariadb-account-create-update" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596639 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffdcca1e-8be0-4069-888b-08a26ffaf8b0" containerName="mariadb-database-create" Dec 02 10:32:16 crc 
kubenswrapper[4711]: I1202 10:32:16.596657 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b4e096-f633-4842-a5e1-9cc10c99ff50" containerName="mariadb-database-create" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596671 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5ba869-bfa6-40fa-b81b-7b7f3490e36e" containerName="mariadb-account-create-update" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596688 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="366b0e56-2601-4ae2-90be-958339d5bde1" containerName="mariadb-database-create" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596697 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="408e02d3-8104-465b-b1ee-0c6f2812df83" containerName="ovn-config" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596709 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="a961b46e-6c27-4361-8dbd-7cc28d6b2a32" containerName="mariadb-account-create-update" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.596722 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b94dbaa-33c0-42b0-b71a-9af5fda1a876" containerName="glance-db-sync" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.597584 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.609860 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-9gp2w"] Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.714851 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-config\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.714932 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.715007 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-dns-svc\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.715058 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx2wq\" (UniqueName: \"kubernetes.io/projected/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-kube-api-access-kx2wq\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.715102 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.816166 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-config\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.816207 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.816250 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-dns-svc\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.816287 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx2wq\" (UniqueName: \"kubernetes.io/projected/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-kube-api-access-kx2wq\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.816308 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.817313 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-config\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.817329 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.817986 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.818155 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-dns-svc\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.835081 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx2wq\" (UniqueName: \"kubernetes.io/projected/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-kube-api-access-kx2wq\") pod \"dnsmasq-dns-74dc88fc-9gp2w\" (UID: 
\"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:16 crc kubenswrapper[4711]: I1202 10:32:16.913049 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:17 crc kubenswrapper[4711]: I1202 10:32:17.426756 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-9gp2w"] Dec 02 10:32:17 crc kubenswrapper[4711]: W1202 10:32:17.457447 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a870e5d_69c1_4cc6_9e69_cdeaca82b49f.slice/crio-37f87a1ad0842f44591fb01d0eb0c16edec37d2db084da2d7f7094afaf3256f6 WatchSource:0}: Error finding container 37f87a1ad0842f44591fb01d0eb0c16edec37d2db084da2d7f7094afaf3256f6: Status 404 returned error can't find the container with id 37f87a1ad0842f44591fb01d0eb0c16edec37d2db084da2d7f7094afaf3256f6 Dec 02 10:32:18 crc kubenswrapper[4711]: I1202 10:32:18.093766 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" event={"ID":"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f","Type":"ContainerStarted","Data":"37f87a1ad0842f44591fb01d0eb0c16edec37d2db084da2d7f7094afaf3256f6"} Dec 02 10:32:22 crc kubenswrapper[4711]: I1202 10:32:22.131066 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" event={"ID":"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f","Type":"ContainerStarted","Data":"d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077"} Dec 02 10:32:22 crc kubenswrapper[4711]: I1202 10:32:22.586175 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:32:22 crc kubenswrapper[4711]: I1202 
10:32:22.586261 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:32:25 crc kubenswrapper[4711]: I1202 10:32:25.168782 4711 generic.go:334] "Generic (PLEG): container finished" podID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" containerID="d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077" exitCode=0 Dec 02 10:32:25 crc kubenswrapper[4711]: I1202 10:32:25.168842 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" event={"ID":"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f","Type":"ContainerDied","Data":"d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077"} Dec 02 10:32:25 crc kubenswrapper[4711]: I1202 10:32:25.186287 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"da4ca046560c414e829561d05ca3c80ee902de3a6cef36f28ab858296169f8ef"} Dec 02 10:32:26 crc kubenswrapper[4711]: I1202 10:32:26.199072 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" event={"ID":"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f","Type":"ContainerStarted","Data":"39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b"} Dec 02 10:32:26 crc kubenswrapper[4711]: I1202 10:32:26.199542 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:26 crc kubenswrapper[4711]: I1202 10:32:26.209537 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"5d71954552ed0a2f1929d91ee9ac20388d21513f70209f14d8b0f7bac0a22f90"} Dec 02 10:32:26 crc kubenswrapper[4711]: I1202 10:32:26.209605 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"b90ecceccaaf5ee57ed68e6bb166454589ca8d26f75cded22b522fd091613e22"} Dec 02 10:32:26 crc kubenswrapper[4711]: I1202 10:32:26.209623 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"cf0e8e8e62e11d8873f52b3ce512b3048be14a03bb15dcfc62bb1dc4769c7b4c"} Dec 02 10:32:26 crc kubenswrapper[4711]: I1202 10:32:26.225583 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" podStartSLOduration=10.225560724 podStartE2EDuration="10.225560724s" podCreationTimestamp="2025-12-02 10:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:26.221396951 +0000 UTC m=+1135.930763418" watchObservedRunningTime="2025-12-02 10:32:26.225560724 +0000 UTC m=+1135.934927181" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.232561 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"fb54826b67152ac8ade17a4deb82bf0cfd49d8e913fd3337bc47f71fd03af927"} Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.233071 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"052d06d83518470b569a49070aa1a3dcd87e9a827dd47d928c304133f952b7c7"} Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.233099 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"23030cd9-0bb2-4574-8c49-405bef4719b5","Type":"ContainerStarted","Data":"0629706a820d77785f44bb8636d47bfa21c56d5a92d06fcc29714cd8818f407b"} Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.235022 4711 generic.go:334] "Generic (PLEG): container finished" podID="12ce0662-b1aa-405b-a577-ebfd14385735" containerID="24d1a5b493a52689447976f3350264a9cd16de866ad7a1cbd07b6cf9f89d4626" exitCode=0 Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.235135 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s2r4g" event={"ID":"12ce0662-b1aa-405b-a577-ebfd14385735","Type":"ContainerDied","Data":"24d1a5b493a52689447976f3350264a9cd16de866ad7a1cbd07b6cf9f89d4626"} Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.289233 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.257775565 podStartE2EDuration="59.289204855s" podCreationTimestamp="2025-12-02 10:31:28 +0000 UTC" firstStartedPulling="2025-12-02 10:32:02.431573389 +0000 UTC m=+1112.140939836" lastFinishedPulling="2025-12-02 10:32:17.463002679 +0000 UTC m=+1127.172369126" observedRunningTime="2025-12-02 10:32:27.274218137 +0000 UTC m=+1136.983584604" watchObservedRunningTime="2025-12-02 10:32:27.289204855 +0000 UTC m=+1136.998571332" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.592810 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-9gp2w"] Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.606354 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-r4vv5"] Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.608301 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.610608 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.628474 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-r4vv5"] Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.702638 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.702936 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgxd5\" (UniqueName: \"kubernetes.io/projected/aaa776cd-384c-4d18-9842-8be5867efb54-kube-api-access-bgxd5\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.703104 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.703222 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-config\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.703318 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.703419 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.804723 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.804775 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-config\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.804804 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.804836 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.804860 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.804930 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgxd5\" (UniqueName: \"kubernetes.io/projected/aaa776cd-384c-4d18-9842-8be5867efb54-kube-api-access-bgxd5\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.805688 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.805780 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-config\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: 
I1202 10:32:27.805808 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.805899 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.806168 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.826665 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgxd5\" (UniqueName: \"kubernetes.io/projected/aaa776cd-384c-4d18-9842-8be5867efb54-kube-api-access-bgxd5\") pod \"dnsmasq-dns-5f59b8f679-r4vv5\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:27 crc kubenswrapper[4711]: I1202 10:32:27.926099 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.245873 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" podUID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" containerName="dnsmasq-dns" containerID="cri-o://39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b" gracePeriod=10 Dec 02 10:32:28 crc kubenswrapper[4711]: E1202 10:32:28.342838 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a870e5d_69c1_4cc6_9e69_cdeaca82b49f.slice/crio-conmon-39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b.scope\": RecentStats: unable to find data in memory cache]" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.402436 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-r4vv5"] Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.557431 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.713008 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.719350 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-config-data\") pod \"12ce0662-b1aa-405b-a577-ebfd14385735\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.719418 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vkzv\" (UniqueName: \"kubernetes.io/projected/12ce0662-b1aa-405b-a577-ebfd14385735-kube-api-access-9vkzv\") pod \"12ce0662-b1aa-405b-a577-ebfd14385735\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.719636 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-combined-ca-bundle\") pod \"12ce0662-b1aa-405b-a577-ebfd14385735\" (UID: \"12ce0662-b1aa-405b-a577-ebfd14385735\") " Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.725882 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ce0662-b1aa-405b-a577-ebfd14385735-kube-api-access-9vkzv" (OuterVolumeSpecName: "kube-api-access-9vkzv") pod "12ce0662-b1aa-405b-a577-ebfd14385735" (UID: "12ce0662-b1aa-405b-a577-ebfd14385735"). InnerVolumeSpecName "kube-api-access-9vkzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.784654 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12ce0662-b1aa-405b-a577-ebfd14385735" (UID: "12ce0662-b1aa-405b-a577-ebfd14385735"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.821227 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-dns-svc\") pod \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.821290 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-config\") pod \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.821442 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-sb\") pod \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.821511 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx2wq\" (UniqueName: \"kubernetes.io/projected/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-kube-api-access-kx2wq\") pod \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.821553 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-nb\") pod \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\" (UID: \"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f\") " Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.822115 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.822144 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vkzv\" (UniqueName: \"kubernetes.io/projected/12ce0662-b1aa-405b-a577-ebfd14385735-kube-api-access-9vkzv\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.829742 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-kube-api-access-kx2wq" (OuterVolumeSpecName: "kube-api-access-kx2wq") pod "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" (UID: "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f"). InnerVolumeSpecName "kube-api-access-kx2wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.841013 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-config-data" (OuterVolumeSpecName: "config-data") pod "12ce0662-b1aa-405b-a577-ebfd14385735" (UID: "12ce0662-b1aa-405b-a577-ebfd14385735"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.874612 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-config" (OuterVolumeSpecName: "config") pod "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" (UID: "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.874708 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" (UID: "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.875321 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" (UID: "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.896464 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" (UID: "8a870e5d-69c1-4cc6-9e69-cdeaca82b49f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.924127 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ce0662-b1aa-405b-a577-ebfd14385735-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.924167 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.924180 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.924191 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.924205 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx2wq\" (UniqueName: \"kubernetes.io/projected/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-kube-api-access-kx2wq\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:28 crc kubenswrapper[4711]: I1202 10:32:28.924217 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.260556 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s2r4g" event={"ID":"12ce0662-b1aa-405b-a577-ebfd14385735","Type":"ContainerDied","Data":"d83b9e82bbf04d643d3263fff4d4b2e34900c818794e9d4149c09236143f7f19"} Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 
10:32:29.260592 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s2r4g" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.260748 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d83b9e82bbf04d643d3263fff4d4b2e34900c818794e9d4149c09236143f7f19" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.265824 4711 generic.go:334] "Generic (PLEG): container finished" podID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" containerID="39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b" exitCode=0 Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.265997 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" event={"ID":"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f","Type":"ContainerDied","Data":"39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b"} Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.266057 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" event={"ID":"8a870e5d-69c1-4cc6-9e69-cdeaca82b49f","Type":"ContainerDied","Data":"37f87a1ad0842f44591fb01d0eb0c16edec37d2db084da2d7f7094afaf3256f6"} Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.266120 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-9gp2w" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.266122 4711 scope.go:117] "RemoveContainer" containerID="39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.268864 4711 generic.go:334] "Generic (PLEG): container finished" podID="aaa776cd-384c-4d18-9842-8be5867efb54" containerID="7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86" exitCode=0 Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.268913 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" event={"ID":"aaa776cd-384c-4d18-9842-8be5867efb54","Type":"ContainerDied","Data":"7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86"} Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.269080 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" event={"ID":"aaa776cd-384c-4d18-9842-8be5867efb54","Type":"ContainerStarted","Data":"47f1cabb225f8921189f711a65908812d7c94494a1a913c10ce62308af309524"} Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.315919 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-9gp2w"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.336176 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-9gp2w"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.439367 4711 scope.go:117] "RemoveContainer" containerID="d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.472256 4711 scope.go:117] "RemoveContainer" containerID="39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b" Dec 02 10:32:29 crc kubenswrapper[4711]: E1202 10:32:29.472895 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b\": container with ID starting with 39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b not found: ID does not exist" containerID="39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.473026 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b"} err="failed to get container status \"39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b\": rpc error: code = NotFound desc = could not find container \"39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b\": container with ID starting with 39ce18c744aba860ef11f7817831899da518c4505123a512b925d62470466e3b not found: ID does not exist" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.473083 4711 scope.go:117] "RemoveContainer" containerID="d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077" Dec 02 10:32:29 crc kubenswrapper[4711]: E1202 10:32:29.475630 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077\": container with ID starting with d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077 not found: ID does not exist" containerID="d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.475674 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077"} err="failed to get container status \"d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077\": rpc error: code = NotFound desc = could not find container \"d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077\": container 
with ID starting with d8c5f5761a3e2976271da06c951ee759468cc338c84ece8c9534b6c815521077 not found: ID does not exist" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.598023 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-r4vv5"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.606996 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x4zt9"] Dec 02 10:32:29 crc kubenswrapper[4711]: E1202 10:32:29.607420 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" containerName="init" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.607444 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" containerName="init" Dec 02 10:32:29 crc kubenswrapper[4711]: E1202 10:32:29.607480 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" containerName="dnsmasq-dns" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.607488 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" containerName="dnsmasq-dns" Dec 02 10:32:29 crc kubenswrapper[4711]: E1202 10:32:29.607524 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ce0662-b1aa-405b-a577-ebfd14385735" containerName="keystone-db-sync" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.607534 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ce0662-b1aa-405b-a577-ebfd14385735" containerName="keystone-db-sync" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.607720 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ce0662-b1aa-405b-a577-ebfd14385735" containerName="keystone-db-sync" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.607752 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" 
containerName="dnsmasq-dns" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.610670 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.612986 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.613279 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.614991 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.615085 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.615349 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6k6c6" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.638477 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x4zt9"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.655112 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7qxx8"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.656810 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.690297 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7qxx8"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.745634 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.745903 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746033 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-config\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746148 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-combined-ca-bundle\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746230 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-credential-keys\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746307 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746403 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746499 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-scripts\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746576 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-fernet-keys\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746655 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6zhx6\" (UniqueName: \"kubernetes.io/projected/4598fa72-ccf7-4de9-9cef-14c227650911-kube-api-access-6zhx6\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746751 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-config-data\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.746822 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpt2x\" (UniqueName: \"kubernetes.io/projected/163592a2-c106-47a3-a114-d163861dde5b-kube-api-access-tpt2x\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.847800 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.847861 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.847899 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-scripts\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.847916 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-fernet-keys\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.847941 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zhx6\" (UniqueName: \"kubernetes.io/projected/4598fa72-ccf7-4de9-9cef-14c227650911-kube-api-access-6zhx6\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.847993 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-config-data\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.848010 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpt2x\" (UniqueName: \"kubernetes.io/projected/163592a2-c106-47a3-a114-d163861dde5b-kube-api-access-tpt2x\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.848042 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-nb\") pod 
\"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.848064 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.848081 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-config\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.848110 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-combined-ca-bundle\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.848130 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-credential-keys\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.851195 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: 
\"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.851721 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.852281 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-config\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.852778 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.853300 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.853922 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-config-data\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 
10:32:29.854400 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-scripts\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.855299 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-fernet-keys\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.856556 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-combined-ca-bundle\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.865549 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-credential-keys\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.866802 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7855f9b6bf-w42l8"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.868108 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.874434 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.874888 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.879547 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-4w49j" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.887939 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zhx6\" (UniqueName: \"kubernetes.io/projected/4598fa72-ccf7-4de9-9cef-14c227650911-kube-api-access-6zhx6\") pod \"dnsmasq-dns-bbf5cc879-7qxx8\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.889912 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.896908 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpt2x\" (UniqueName: \"kubernetes.io/projected/163592a2-c106-47a3-a114-d163861dde5b-kube-api-access-tpt2x\") pod \"keystone-bootstrap-x4zt9\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.909781 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7855f9b6bf-w42l8"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.934343 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.962456 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.964639 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.984316 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:32:29 crc kubenswrapper[4711]: I1202 10:32:29.989204 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.027267 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.043028 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-hhnhk"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.044092 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.047596 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cpm6m" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.047765 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.051687 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.052328 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.053795 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rllxq"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.056086 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.057539 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-scripts\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.065059 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hhnhk"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.065637 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.068677 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.068971 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hrxtj" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.065751 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 
crc kubenswrapper[4711]: I1202 10:32:30.069392 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.069595 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-config-data\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.069754 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5rrj\" (UniqueName: \"kubernetes.io/projected/e2d7602c-dae1-4110-b8db-aa51a0761754-kube-api-access-m5rrj\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.069931 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2krl\" (UniqueName: \"kubernetes.io/projected/98dbf68a-a027-4b09-a124-5438406d4b4f-kube-api-access-h2krl\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.095793 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2d7602c-dae1-4110-b8db-aa51a0761754-horizon-secret-key\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 
10:32:30.097291 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-config-data\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.097481 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-scripts\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.097634 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d7602c-dae1-4110-b8db-aa51a0761754-logs\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.097803 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.097914 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.146082 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rllxq"] Dec 02 10:32:30 crc 
kubenswrapper[4711]: I1202 10:32:30.191492 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7qxx8"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.201919 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-scripts\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.201981 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202001 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202044 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-scripts\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202075 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-config\") pod \"neutron-db-sync-rllxq\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: 
I1202 10:32:30.202096 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202142 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202186 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf2tt\" (UniqueName: \"kubernetes.io/projected/c36d7741-4744-4076-ad79-2cd1aca48cec-kube-api-access-jf2tt\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202212 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-config-data\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202233 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-combined-ca-bundle\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202252 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m5rrj\" (UniqueName: \"kubernetes.io/projected/e2d7602c-dae1-4110-b8db-aa51a0761754-kube-api-access-m5rrj\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202269 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c36d7741-4744-4076-ad79-2cd1aca48cec-etc-machine-id\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202287 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvpnc\" (UniqueName: \"kubernetes.io/projected/44e7dd62-8534-48ac-9b10-3cafac8b1192-kube-api-access-mvpnc\") pod \"neutron-db-sync-rllxq\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202338 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2krl\" (UniqueName: \"kubernetes.io/projected/98dbf68a-a027-4b09-a124-5438406d4b4f-kube-api-access-h2krl\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202357 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2d7602c-dae1-4110-b8db-aa51a0761754-horizon-secret-key\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202384 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-config-data\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202418 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-config-data\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202435 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-scripts\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202470 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d7602c-dae1-4110-b8db-aa51a0761754-logs\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202499 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-db-sync-config-data\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.202524 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-combined-ca-bundle\") pod \"neutron-db-sync-rllxq\" 
(UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.203596 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.205749 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.207141 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-scripts\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.212674 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d7602c-dae1-4110-b8db-aa51a0761754-logs\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.214923 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-config-data\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.225546 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.226034 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2d7602c-dae1-4110-b8db-aa51a0761754-horizon-secret-key\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.226708 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.228632 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-config-data\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.230666 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-scripts\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.230685 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xfj2j"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.233059 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.239786 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dp74j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.240068 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.248810 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5rrj\" (UniqueName: \"kubernetes.io/projected/e2d7602c-dae1-4110-b8db-aa51a0761754-kube-api-access-m5rrj\") pod \"horizon-7855f9b6bf-w42l8\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.255618 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ss597"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.280872 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2krl\" (UniqueName: \"kubernetes.io/projected/98dbf68a-a027-4b09-a124-5438406d4b4f-kube-api-access-h2krl\") pod \"ceilometer-0\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.282057 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.292635 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.292944 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.297205 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sdcct" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.298764 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xfj2j"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.316282 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-config-data\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.316619 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-combined-ca-bundle\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.316724 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-db-sync-config-data\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.316834 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-config\") pod \"neutron-db-sync-rllxq\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317007 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/708582b5-ed1b-43e9-959a-482979700291-kube-api-access-z4t96\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317262 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf2tt\" (UniqueName: \"kubernetes.io/projected/c36d7741-4744-4076-ad79-2cd1aca48cec-kube-api-access-jf2tt\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317378 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9lf\" (UniqueName: \"kubernetes.io/projected/426ff483-f882-4d91-b5da-bab147d2886d-kube-api-access-jb9lf\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317468 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-combined-ca-bundle\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317562 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426ff483-f882-4d91-b5da-bab147d2886d-logs\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317643 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c36d7741-4744-4076-ad79-2cd1aca48cec-etc-machine-id\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317738 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvpnc\" (UniqueName: \"kubernetes.io/projected/44e7dd62-8534-48ac-9b10-3cafac8b1192-kube-api-access-mvpnc\") pod \"neutron-db-sync-rllxq\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317834 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-combined-ca-bundle\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.317999 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-config-data\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.318151 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-db-sync-config-data\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.318240 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-combined-ca-bundle\") pod \"neutron-db-sync-rllxq\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.318339 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-scripts\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.318429 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-scripts\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.321222 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-config\") pod \"neutron-db-sync-rllxq\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.321386 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c36d7741-4744-4076-ad79-2cd1aca48cec-etc-machine-id\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " 
pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.323262 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-scripts\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.328240 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-combined-ca-bundle\") pod \"neutron-db-sync-rllxq\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.330911 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ss597"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.331534 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-db-sync-config-data\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.331670 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-config-data\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.334494 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-combined-ca-bundle\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " 
pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.341329 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf2tt\" (UniqueName: \"kubernetes.io/projected/c36d7741-4744-4076-ad79-2cd1aca48cec-kube-api-access-jf2tt\") pod \"cinder-db-sync-hhnhk\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.344653 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" event={"ID":"aaa776cd-384c-4d18-9842-8be5867efb54","Type":"ContainerStarted","Data":"136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6"} Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.345102 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" podUID="aaa776cd-384c-4d18-9842-8be5867efb54" containerName="dnsmasq-dns" containerID="cri-o://136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6" gracePeriod=10 Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.345195 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.347329 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvpnc\" (UniqueName: \"kubernetes.io/projected/44e7dd62-8534-48ac-9b10-3cafac8b1192-kube-api-access-mvpnc\") pod \"neutron-db-sync-rllxq\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.354656 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.370827 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.373254 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.375668 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.376845 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.377242 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qcf8c" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.377472 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.377597 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.400987 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rllxq" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420510 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420554 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420619 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/708582b5-ed1b-43e9-959a-482979700291-kube-api-access-z4t96\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420674 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9lf\" (UniqueName: \"kubernetes.io/projected/426ff483-f882-4d91-b5da-bab147d2886d-kube-api-access-jb9lf\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420698 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420745 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426ff483-f882-4d91-b5da-bab147d2886d-logs\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420774 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-combined-ca-bundle\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420871 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-logs\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420915 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-scripts\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420933 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 
02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.420980 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.421003 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.421024 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-config-data\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.421051 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k594c\" (UniqueName: \"kubernetes.io/projected/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-kube-api-access-k594c\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.421082 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-combined-ca-bundle\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 
10:32:30.421097 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-db-sync-config-data\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.422303 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426ff483-f882-4d91-b5da-bab147d2886d-logs\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.427941 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-scripts\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.429550 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-db-sync-config-data\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.434740 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-combined-ca-bundle\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.437606 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-combined-ca-bundle\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.438832 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7475865c97-ctdkx"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.440865 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.441939 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/708582b5-ed1b-43e9-959a-482979700291-kube-api-access-z4t96\") pod \"barbican-db-sync-xfj2j\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.447274 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-config-data\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.453675 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9lf\" (UniqueName: \"kubernetes.io/projected/426ff483-f882-4d91-b5da-bab147d2886d-kube-api-access-jb9lf\") pod \"placement-db-sync-ss597\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.461466 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ndg5k"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.463047 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.486917 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ndg5k"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.512879 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.519524 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7475865c97-ctdkx"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522061 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522126 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-config-data\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522192 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-logs\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522219 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522239 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522264 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522285 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-config\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522311 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522340 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k594c\" (UniqueName: \"kubernetes.io/projected/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-kube-api-access-k594c\") pod \"glance-default-external-api-0\" (UID: 
\"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522376 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1771ec89-03e5-4202-953a-7c745e18b7f1-logs\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522407 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522427 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522470 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tkqp\" (UniqueName: \"kubernetes.io/projected/c26200d5-5908-40af-89de-c219091721b5-kube-api-access-7tkqp\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522493 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-scripts\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " 
pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522521 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1771ec89-03e5-4202-953a-7c745e18b7f1-horizon-secret-key\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522547 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522580 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqldj\" (UniqueName: \"kubernetes.io/projected/1771ec89-03e5-4202-953a-7c745e18b7f1-kube-api-access-jqldj\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522607 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.522643 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: 
\"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.524390 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.527323 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.530318 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-logs\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.545188 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.545971 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " 
pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.547562 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.549246 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k594c\" (UniqueName: \"kubernetes.io/projected/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-kube-api-access-k594c\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.549816 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.554783 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" podStartSLOduration=3.554762396 podStartE2EDuration="3.554762396s" podCreationTimestamp="2025-12-02 10:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:30.382573566 +0000 UTC m=+1140.091940013" watchObservedRunningTime="2025-12-02 10:32:30.554762396 +0000 UTC m=+1140.264128833" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.554904 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.573905 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.585054 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624495 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqldj\" (UniqueName: \"kubernetes.io/projected/1771ec89-03e5-4202-953a-7c745e18b7f1-kube-api-access-jqldj\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624559 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624592 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624611 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624637 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-config-data\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624685 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-config\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624727 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624763 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1771ec89-03e5-4202-953a-7c745e18b7f1-logs\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624802 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tkqp\" (UniqueName: \"kubernetes.io/projected/c26200d5-5908-40af-89de-c219091721b5-kube-api-access-7tkqp\") pod 
\"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624821 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-scripts\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.624838 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1771ec89-03e5-4202-953a-7c745e18b7f1-horizon-secret-key\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.626479 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.627312 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-config\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.627942 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.628815 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.628909 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1771ec89-03e5-4202-953a-7c745e18b7f1-logs\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.629244 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.632696 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1771ec89-03e5-4202-953a-7c745e18b7f1-horizon-secret-key\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.633278 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-scripts\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.634174 4711 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-config-data\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.634567 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ss597" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.649598 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqldj\" (UniqueName: \"kubernetes.io/projected/1771ec89-03e5-4202-953a-7c745e18b7f1-kube-api-access-jqldj\") pod \"horizon-7475865c97-ctdkx\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.660330 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tkqp\" (UniqueName: \"kubernetes.io/projected/c26200d5-5908-40af-89de-c219091721b5-kube-api-access-7tkqp\") pod \"dnsmasq-dns-56df8fb6b7-ndg5k\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.670752 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x4zt9"] Dec 02 10:32:30 crc kubenswrapper[4711]: W1202 10:32:30.690054 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod163592a2_c106_47a3_a114_d163861dde5b.slice/crio-333acdb5fe919b1e4da196561f5bb0b636d2f6bb66f197a048440a8aee65af8e WatchSource:0}: Error finding container 333acdb5fe919b1e4da196561f5bb0b636d2f6bb66f197a048440a8aee65af8e: Status 404 returned error can't find the container with id 333acdb5fe919b1e4da196561f5bb0b636d2f6bb66f197a048440a8aee65af8e Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.693767 
4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.804569 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7qxx8"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.817237 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:30 crc kubenswrapper[4711]: W1202 10:32:30.818167 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4598fa72_ccf7_4de9_9cef_14c227650911.slice/crio-81cef7c70ef6038579536b2b624fe4bcc0a29c38d0004b160d4a8ab9cac21439 WatchSource:0}: Error finding container 81cef7c70ef6038579536b2b624fe4bcc0a29c38d0004b160d4a8ab9cac21439: Status 404 returned error can't find the container with id 81cef7c70ef6038579536b2b624fe4bcc0a29c38d0004b160d4a8ab9cac21439 Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.852863 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.975818 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.977381 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.980284 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 10:32:30 crc kubenswrapper[4711]: I1202 10:32:30.980575 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.026613 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.052893 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.117714 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a870e5d-69c1-4cc6-9e69-cdeaca82b49f" path="/var/lib/kubelet/pods/8a870e5d-69c1-4cc6-9e69-cdeaca82b49f/volumes" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.162902 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpszb\" (UniqueName: \"kubernetes.io/projected/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-kube-api-access-kpszb\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.163015 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.163104 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.163172 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.163447 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.163650 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.163740 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.163834 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.219125 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.265471 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-swift-storage-0\") pod \"aaa776cd-384c-4d18-9842-8be5867efb54\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.265573 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgxd5\" (UniqueName: \"kubernetes.io/projected/aaa776cd-384c-4d18-9842-8be5867efb54-kube-api-access-bgxd5\") pod \"aaa776cd-384c-4d18-9842-8be5867efb54\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.265599 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-sb\") pod \"aaa776cd-384c-4d18-9842-8be5867efb54\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.265694 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-config\") pod \"aaa776cd-384c-4d18-9842-8be5867efb54\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.265726 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-svc\") pod \"aaa776cd-384c-4d18-9842-8be5867efb54\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.265747 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-nb\") pod \"aaa776cd-384c-4d18-9842-8be5867efb54\" (UID: \"aaa776cd-384c-4d18-9842-8be5867efb54\") " Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.265971 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.266041 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.266066 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.266097 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.266129 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpszb\" (UniqueName: \"kubernetes.io/projected/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-kube-api-access-kpszb\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.266148 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.266180 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.266201 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.266538 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") device mount path \"/mnt/openstack/pv10\"" 
pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.272811 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.279966 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.280625 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.281503 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.287970 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.291101 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa776cd-384c-4d18-9842-8be5867efb54-kube-api-access-bgxd5" (OuterVolumeSpecName: "kube-api-access-bgxd5") pod "aaa776cd-384c-4d18-9842-8be5867efb54" (UID: "aaa776cd-384c-4d18-9842-8be5867efb54"). InnerVolumeSpecName "kube-api-access-bgxd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.291681 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.299917 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.304555 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.316324 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpszb\" (UniqueName: \"kubernetes.io/projected/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-kube-api-access-kpszb\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.352060 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc 
kubenswrapper[4711]: I1202 10:32:31.359071 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x4zt9" event={"ID":"163592a2-c106-47a3-a114-d163861dde5b","Type":"ContainerStarted","Data":"0ad12b4324fa1097c4b553e771c45bfcd6df7e3f89a17a2ecb7a4c0e8dc33f8a"} Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.359116 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x4zt9" event={"ID":"163592a2-c106-47a3-a114-d163861dde5b","Type":"ContainerStarted","Data":"333acdb5fe919b1e4da196561f5bb0b636d2f6bb66f197a048440a8aee65af8e"} Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.359391 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aaa776cd-384c-4d18-9842-8be5867efb54" (UID: "aaa776cd-384c-4d18-9842-8be5867efb54"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.362364 4711 generic.go:334] "Generic (PLEG): container finished" podID="aaa776cd-384c-4d18-9842-8be5867efb54" containerID="136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6" exitCode=0 Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.362419 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" event={"ID":"aaa776cd-384c-4d18-9842-8be5867efb54","Type":"ContainerDied","Data":"136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6"} Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.362445 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" event={"ID":"aaa776cd-384c-4d18-9842-8be5867efb54","Type":"ContainerDied","Data":"47f1cabb225f8921189f711a65908812d7c94494a1a913c10ce62308af309524"} Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.362463 4711 scope.go:117] 
"RemoveContainer" containerID="136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.362567 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-r4vv5" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.367753 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" event={"ID":"4598fa72-ccf7-4de9-9cef-14c227650911","Type":"ContainerStarted","Data":"a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65"} Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.367798 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" event={"ID":"4598fa72-ccf7-4de9-9cef-14c227650911","Type":"ContainerStarted","Data":"81cef7c70ef6038579536b2b624fe4bcc0a29c38d0004b160d4a8ab9cac21439"} Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.368200 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" podUID="4598fa72-ccf7-4de9-9cef-14c227650911" containerName="init" containerID="cri-o://a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65" gracePeriod=10 Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.374870 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98dbf68a-a027-4b09-a124-5438406d4b4f","Type":"ContainerStarted","Data":"964192f9500ff93cf7b375a2675820bea29178eeb891ebb6f56b17ca398d56a6"} Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.376514 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.376538 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgxd5\" (UniqueName: 
\"kubernetes.io/projected/aaa776cd-384c-4d18-9842-8be5867efb54-kube-api-access-bgxd5\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.378876 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aaa776cd-384c-4d18-9842-8be5867efb54" (UID: "aaa776cd-384c-4d18-9842-8be5867efb54"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.381638 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x4zt9" podStartSLOduration=2.381614381 podStartE2EDuration="2.381614381s" podCreationTimestamp="2025-12-02 10:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:31.375180316 +0000 UTC m=+1141.084546763" watchObservedRunningTime="2025-12-02 10:32:31.381614381 +0000 UTC m=+1141.090980818" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.402437 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aaa776cd-384c-4d18-9842-8be5867efb54" (UID: "aaa776cd-384c-4d18-9842-8be5867efb54"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.402790 4711 scope.go:117] "RemoveContainer" containerID="7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.426262 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-config" (OuterVolumeSpecName: "config") pod "aaa776cd-384c-4d18-9842-8be5867efb54" (UID: "aaa776cd-384c-4d18-9842-8be5867efb54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.438811 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aaa776cd-384c-4d18-9842-8be5867efb54" (UID: "aaa776cd-384c-4d18-9842-8be5867efb54"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.471158 4711 scope.go:117] "RemoveContainer" containerID="136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6" Dec 02 10:32:31 crc kubenswrapper[4711]: E1202 10:32:31.471884 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6\": container with ID starting with 136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6 not found: ID does not exist" containerID="136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.471920 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6"} err="failed to get container status \"136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6\": rpc error: code = NotFound desc = could not find container \"136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6\": container with ID starting with 136ab7bdc3bb401d104caba1184e8e1360f4b80ea54d0bc33b70b8224c4528e6 not found: ID does not exist" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.471944 4711 scope.go:117] "RemoveContainer" containerID="7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86" Dec 02 10:32:31 crc kubenswrapper[4711]: E1202 10:32:31.472428 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86\": container with ID starting with 7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86 not found: ID does not exist" containerID="7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.472447 
4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86"} err="failed to get container status \"7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86\": rpc error: code = NotFound desc = could not find container \"7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86\": container with ID starting with 7dad17f7f192b1f7ff4f8d4a32c63c10706297febe7eb3745c1790b81daebe86 not found: ID does not exist" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.475531 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7855f9b6bf-w42l8"] Dec 02 10:32:31 crc kubenswrapper[4711]: W1202 10:32:31.488894 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d7602c_dae1_4110_b8db_aa51a0761754.slice/crio-edd59065c7404e86bbd95cde40799367953fee0723f180ee466b06faa87a95f5 WatchSource:0}: Error finding container edd59065c7404e86bbd95cde40799367953fee0723f180ee466b06faa87a95f5: Status 404 returned error can't find the container with id edd59065c7404e86bbd95cde40799367953fee0723f180ee466b06faa87a95f5 Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.490634 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.490659 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.490673 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-config\") on 
node \"crc\" DevicePath \"\"" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.490684 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa776cd-384c-4d18-9842-8be5867efb54-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.496663 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.506662 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ss597"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.534663 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hhnhk"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.543225 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xfj2j"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.555627 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rllxq"] Dec 02 10:32:31 crc kubenswrapper[4711]: W1202 10:32:31.559550 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod426ff483_f882_4d91_b5da_bab147d2886d.slice/crio-cf7d26984a1f0fe9b4c9d4a717aad644245adf3c113e8bee60aced83280d867f WatchSource:0}: Error finding container cf7d26984a1f0fe9b4c9d4a717aad644245adf3c113e8bee60aced83280d867f: Status 404 returned error can't find the container with id cf7d26984a1f0fe9b4c9d4a717aad644245adf3c113e8bee60aced83280d867f Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.603731 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ndg5k"] Dec 02 10:32:31 crc kubenswrapper[4711]: W1202 10:32:31.619640 4711 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26200d5_5908_40af_89de_c219091721b5.slice/crio-6ee340f7da0f9c6721092fd84e7571332e0cf882c799aa137f6b70293b40d180 WatchSource:0}: Error finding container 6ee340f7da0f9c6721092fd84e7571332e0cf882c799aa137f6b70293b40d180: Status 404 returned error can't find the container with id 6ee340f7da0f9c6721092fd84e7571332e0cf882c799aa137f6b70293b40d180 Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.626694 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.647282 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7475865c97-ctdkx"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.759172 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-r4vv5"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.768537 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-r4vv5"] Dec 02 10:32:31 crc kubenswrapper[4711]: I1202 10:32:31.812458 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.903053 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-nb\") pod \"4598fa72-ccf7-4de9-9cef-14c227650911\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.903085 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zhx6\" (UniqueName: \"kubernetes.io/projected/4598fa72-ccf7-4de9-9cef-14c227650911-kube-api-access-6zhx6\") pod \"4598fa72-ccf7-4de9-9cef-14c227650911\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.903142 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-sb\") pod \"4598fa72-ccf7-4de9-9cef-14c227650911\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.903178 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-config\") pod \"4598fa72-ccf7-4de9-9cef-14c227650911\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.903238 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-swift-storage-0\") pod \"4598fa72-ccf7-4de9-9cef-14c227650911\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.903273 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-svc\") pod \"4598fa72-ccf7-4de9-9cef-14c227650911\" (UID: \"4598fa72-ccf7-4de9-9cef-14c227650911\") " Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.911159 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4598fa72-ccf7-4de9-9cef-14c227650911-kube-api-access-6zhx6" (OuterVolumeSpecName: "kube-api-access-6zhx6") pod "4598fa72-ccf7-4de9-9cef-14c227650911" (UID: "4598fa72-ccf7-4de9-9cef-14c227650911"). InnerVolumeSpecName "kube-api-access-6zhx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.930481 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-config" (OuterVolumeSpecName: "config") pod "4598fa72-ccf7-4de9-9cef-14c227650911" (UID: "4598fa72-ccf7-4de9-9cef-14c227650911"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.930737 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4598fa72-ccf7-4de9-9cef-14c227650911" (UID: "4598fa72-ccf7-4de9-9cef-14c227650911"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.936157 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4598fa72-ccf7-4de9-9cef-14c227650911" (UID: "4598fa72-ccf7-4de9-9cef-14c227650911"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.939094 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4598fa72-ccf7-4de9-9cef-14c227650911" (UID: "4598fa72-ccf7-4de9-9cef-14c227650911"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:31.953767 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4598fa72-ccf7-4de9-9cef-14c227650911" (UID: "4598fa72-ccf7-4de9-9cef-14c227650911"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.006995 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.007015 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.007026 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.007035 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zhx6\" (UniqueName: 
\"kubernetes.io/projected/4598fa72-ccf7-4de9-9cef-14c227650911-kube-api-access-6zhx6\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.007045 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.007053 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4598fa72-ccf7-4de9-9cef-14c227650911-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.111926 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7855f9b6bf-w42l8"] Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.141226 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.165365 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f6bd6db97-7jhr9"] Dec 02 10:32:32 crc kubenswrapper[4711]: E1202 10:32:32.165831 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa776cd-384c-4d18-9842-8be5867efb54" containerName="dnsmasq-dns" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.165843 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa776cd-384c-4d18-9842-8be5867efb54" containerName="dnsmasq-dns" Dec 02 10:32:32 crc kubenswrapper[4711]: E1202 10:32:32.165859 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4598fa72-ccf7-4de9-9cef-14c227650911" containerName="init" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.165864 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="4598fa72-ccf7-4de9-9cef-14c227650911" containerName="init" Dec 02 10:32:32 crc kubenswrapper[4711]: E1202 10:32:32.165880 4711 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="aaa776cd-384c-4d18-9842-8be5867efb54" containerName="init" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.165887 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa776cd-384c-4d18-9842-8be5867efb54" containerName="init" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.166077 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa776cd-384c-4d18-9842-8be5867efb54" containerName="dnsmasq-dns" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.166093 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="4598fa72-ccf7-4de9-9cef-14c227650911" containerName="init" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.167051 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.182400 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6bd6db97-7jhr9"] Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.210796 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63935e2a-d282-4b50-b62b-d89c82a6ef1f-logs\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.210831 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smjm9\" (UniqueName: \"kubernetes.io/projected/63935e2a-d282-4b50-b62b-d89c82a6ef1f-kube-api-access-smjm9\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.210878 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-config-data\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.210970 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63935e2a-d282-4b50-b62b-d89c82a6ef1f-horizon-secret-key\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.211011 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-scripts\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.308634 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.320752 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-config-data\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.321036 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63935e2a-d282-4b50-b62b-d89c82a6ef1f-horizon-secret-key\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.321133 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-scripts\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.321200 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63935e2a-d282-4b50-b62b-d89c82a6ef1f-logs\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.321226 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smjm9\" (UniqueName: \"kubernetes.io/projected/63935e2a-d282-4b50-b62b-d89c82a6ef1f-kube-api-access-smjm9\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.322753 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63935e2a-d282-4b50-b62b-d89c82a6ef1f-logs\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.325872 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-scripts\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.337513 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/63935e2a-d282-4b50-b62b-d89c82a6ef1f-horizon-secret-key\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.343646 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smjm9\" (UniqueName: \"kubernetes.io/projected/63935e2a-d282-4b50-b62b-d89c82a6ef1f-kube-api-access-smjm9\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.347048 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-config-data\") pod \"horizon-6f6bd6db97-7jhr9\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.411810 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.421460 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xfj2j" event={"ID":"708582b5-ed1b-43e9-959a-482979700291","Type":"ContainerStarted","Data":"fb400266108c018ccbee54424abae20f425777c13b86a4befc42a1d31c4c06fd"} Dec 02 10:32:32 crc kubenswrapper[4711]: W1202 10:32:32.422280 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3df96cab_0615_4b61_b0e5_726a6ef5b6e1.slice/crio-9a65ff7860122a870891c572ee2426fe71a860f330da5d12a9654cb18f765733 WatchSource:0}: Error finding container 9a65ff7860122a870891c572ee2426fe71a860f330da5d12a9654cb18f765733: Status 404 returned error can't find the container with id 9a65ff7860122a870891c572ee2426fe71a860f330da5d12a9654cb18f765733 Dec 02 10:32:32 crc 
kubenswrapper[4711]: I1202 10:32:32.426019 4711 generic.go:334] "Generic (PLEG): container finished" podID="4598fa72-ccf7-4de9-9cef-14c227650911" containerID="a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65" exitCode=0 Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.426106 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" event={"ID":"4598fa72-ccf7-4de9-9cef-14c227650911","Type":"ContainerDied","Data":"a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.426151 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" event={"ID":"4598fa72-ccf7-4de9-9cef-14c227650911","Type":"ContainerDied","Data":"81cef7c70ef6038579536b2b624fe4bcc0a29c38d0004b160d4a8ab9cac21439"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.426170 4711 scope.go:117] "RemoveContainer" containerID="a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.426349 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7qxx8" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.456540 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rllxq" event={"ID":"44e7dd62-8534-48ac-9b10-3cafac8b1192","Type":"ContainerStarted","Data":"eb9a6a931a0188ebc90a981ea3b3bf17f21c9adb22b783edc2f2835b2647d9b8"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.463662 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ss597" event={"ID":"426ff483-f882-4d91-b5da-bab147d2886d","Type":"ContainerStarted","Data":"cf7d26984a1f0fe9b4c9d4a717aad644245adf3c113e8bee60aced83280d867f"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.464829 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" event={"ID":"c26200d5-5908-40af-89de-c219091721b5","Type":"ContainerStarted","Data":"6ee340f7da0f9c6721092fd84e7571332e0cf882c799aa137f6b70293b40d180"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.467766 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7475865c97-ctdkx" event={"ID":"1771ec89-03e5-4202-953a-7c745e18b7f1","Type":"ContainerStarted","Data":"c2e3bff098ead1a68187e0150a5f2eeae79a21be00c803d18f6797cdcdd1bca8"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.474202 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hhnhk" event={"ID":"c36d7741-4744-4076-ad79-2cd1aca48cec","Type":"ContainerStarted","Data":"e26f446bcb04b3f808f0b4afcc70bb30054b40769c5cb61b264416dc8a1d7344"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.489183 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7855f9b6bf-w42l8" event={"ID":"e2d7602c-dae1-4110-b8db-aa51a0761754","Type":"ContainerStarted","Data":"edd59065c7404e86bbd95cde40799367953fee0723f180ee466b06faa87a95f5"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.495138 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7","Type":"ContainerStarted","Data":"cece38de4fb1dcee0d7e9ad01c0f944886e0a451f488d6de775be84dd845bc78"} Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.510320 4711 scope.go:117] "RemoveContainer" containerID="a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65" Dec 02 10:32:32 crc kubenswrapper[4711]: E1202 10:32:32.510747 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65\": container with ID starting with a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65 not found: ID does not exist" containerID="a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.510783 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65"} err="failed to get container status \"a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65\": rpc error: code = NotFound desc = could not find container \"a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65\": container with ID starting with a80a2c040f8403d3d512b092f7c20b0660f6589dcfbf90cdf16129635367dd65 not found: ID does not exist" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.529615 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.611447 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7qxx8"] Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.620829 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7qxx8"] Dec 02 10:32:32 crc kubenswrapper[4711]: I1202 10:32:32.977309 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.112280 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4598fa72-ccf7-4de9-9cef-14c227650911" path="/var/lib/kubelet/pods/4598fa72-ccf7-4de9-9cef-14c227650911/volumes" Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.112923 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa776cd-384c-4d18-9842-8be5867efb54" path="/var/lib/kubelet/pods/aaa776cd-384c-4d18-9842-8be5867efb54/volumes" Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.291667 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6bd6db97-7jhr9"] Dec 02 10:32:33 crc kubenswrapper[4711]: W1202 10:32:33.315570 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63935e2a_d282_4b50_b62b_d89c82a6ef1f.slice/crio-32ec00a0defb81147dedd9144a060c7107a2f40c9ea2e59f3550e2e3d7a233f9 WatchSource:0}: Error finding container 32ec00a0defb81147dedd9144a060c7107a2f40c9ea2e59f3550e2e3d7a233f9: Status 404 returned error can't find the container with id 32ec00a0defb81147dedd9144a060c7107a2f40c9ea2e59f3550e2e3d7a233f9 Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.509175 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6bd6db97-7jhr9" 
event={"ID":"63935e2a-d282-4b50-b62b-d89c82a6ef1f","Type":"ContainerStarted","Data":"32ec00a0defb81147dedd9144a060c7107a2f40c9ea2e59f3550e2e3d7a233f9"} Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.512898 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3df96cab-0615-4b61-b0e5-726a6ef5b6e1","Type":"ContainerStarted","Data":"65342d7dbe7fd7c46edffd08d081be8fd6d2bb22736216cf809b8e547a7e4f50"} Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.512933 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3df96cab-0615-4b61-b0e5-726a6ef5b6e1","Type":"ContainerStarted","Data":"9a65ff7860122a870891c572ee2426fe71a860f330da5d12a9654cb18f765733"} Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.516556 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rllxq" event={"ID":"44e7dd62-8534-48ac-9b10-3cafac8b1192","Type":"ContainerStarted","Data":"5447f2d26caf26f18e97fa62c49a031feb64bd956fbad5d280640e5eb96be6bf"} Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.520386 4711 generic.go:334] "Generic (PLEG): container finished" podID="c26200d5-5908-40af-89de-c219091721b5" containerID="bb3bf0f965ee0585c48653379d175351f07404f88bf00028c808bc47fe0ac08f" exitCode=0 Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.520490 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" event={"ID":"c26200d5-5908-40af-89de-c219091721b5","Type":"ContainerDied","Data":"bb3bf0f965ee0585c48653379d175351f07404f88bf00028c808bc47fe0ac08f"} Dec 02 10:32:33 crc kubenswrapper[4711]: I1202 10:32:33.528002 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7","Type":"ContainerStarted","Data":"537fcbb32dc8c66417a2c8610fd60bc7d5bd8dd6ed131363896134067937d9b1"} Dec 02 10:32:33 crc 
kubenswrapper[4711]: I1202 10:32:33.540768 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rllxq" podStartSLOduration=4.540743208 podStartE2EDuration="4.540743208s" podCreationTimestamp="2025-12-02 10:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:33.532821103 +0000 UTC m=+1143.242187570" watchObservedRunningTime="2025-12-02 10:32:33.540743208 +0000 UTC m=+1143.250109655" Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.566985 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7","Type":"ContainerStarted","Data":"2f5b6a5ceeb8ff8ad215a77749c3a0bb7947b7c3d6c440ba507c747a05d87e51"} Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.568038 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerName="glance-log" containerID="cri-o://537fcbb32dc8c66417a2c8610fd60bc7d5bd8dd6ed131363896134067937d9b1" gracePeriod=30 Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.568984 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerName="glance-httpd" containerID="cri-o://2f5b6a5ceeb8ff8ad215a77749c3a0bb7947b7c3d6c440ba507c747a05d87e51" gracePeriod=30 Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.587602 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3df96cab-0615-4b61-b0e5-726a6ef5b6e1","Type":"ContainerStarted","Data":"f403fa777c2386a53d1038c3ca222af76a8016dd57e7d67f007746b98025b722"} Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.587609 4711 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerName="glance-log" containerID="cri-o://65342d7dbe7fd7c46edffd08d081be8fd6d2bb22736216cf809b8e547a7e4f50" gracePeriod=30 Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.589281 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerName="glance-httpd" containerID="cri-o://f403fa777c2386a53d1038c3ca222af76a8016dd57e7d67f007746b98025b722" gracePeriod=30 Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.594047 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" event={"ID":"c26200d5-5908-40af-89de-c219091721b5","Type":"ContainerStarted","Data":"ceea0da7b494d6c75f0d5db008d711e7e45f9477875d3c9ddfab2c116ab22750"} Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.594149 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.614649 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.614621867 podStartE2EDuration="4.614621867s" podCreationTimestamp="2025-12-02 10:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:34.609439696 +0000 UTC m=+1144.318806153" watchObservedRunningTime="2025-12-02 10:32:34.614621867 +0000 UTC m=+1144.323988314" Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.648313 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" podStartSLOduration=4.648295593 podStartE2EDuration="4.648295593s" podCreationTimestamp="2025-12-02 10:32:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:34.645809785 +0000 UTC m=+1144.355176232" watchObservedRunningTime="2025-12-02 10:32:34.648295593 +0000 UTC m=+1144.357662040" Dec 02 10:32:34 crc kubenswrapper[4711]: I1202 10:32:34.652140 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.652118196 podStartE2EDuration="5.652118196s" podCreationTimestamp="2025-12-02 10:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:34.628297589 +0000 UTC m=+1144.337664036" watchObservedRunningTime="2025-12-02 10:32:34.652118196 +0000 UTC m=+1144.361484643" Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.600619 4711 generic.go:334] "Generic (PLEG): container finished" podID="163592a2-c106-47a3-a114-d163861dde5b" containerID="0ad12b4324fa1097c4b553e771c45bfcd6df7e3f89a17a2ecb7a4c0e8dc33f8a" exitCode=0 Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.600715 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x4zt9" event={"ID":"163592a2-c106-47a3-a114-d163861dde5b","Type":"ContainerDied","Data":"0ad12b4324fa1097c4b553e771c45bfcd6df7e3f89a17a2ecb7a4c0e8dc33f8a"} Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.604235 4711 generic.go:334] "Generic (PLEG): container finished" podID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerID="2f5b6a5ceeb8ff8ad215a77749c3a0bb7947b7c3d6c440ba507c747a05d87e51" exitCode=0 Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.604261 4711 generic.go:334] "Generic (PLEG): container finished" podID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerID="537fcbb32dc8c66417a2c8610fd60bc7d5bd8dd6ed131363896134067937d9b1" exitCode=143 Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.604310 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7","Type":"ContainerDied","Data":"2f5b6a5ceeb8ff8ad215a77749c3a0bb7947b7c3d6c440ba507c747a05d87e51"} Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.604336 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7","Type":"ContainerDied","Data":"537fcbb32dc8c66417a2c8610fd60bc7d5bd8dd6ed131363896134067937d9b1"} Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.606904 4711 generic.go:334] "Generic (PLEG): container finished" podID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerID="f403fa777c2386a53d1038c3ca222af76a8016dd57e7d67f007746b98025b722" exitCode=0 Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.606931 4711 generic.go:334] "Generic (PLEG): container finished" podID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerID="65342d7dbe7fd7c46edffd08d081be8fd6d2bb22736216cf809b8e547a7e4f50" exitCode=143 Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.607875 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3df96cab-0615-4b61-b0e5-726a6ef5b6e1","Type":"ContainerDied","Data":"f403fa777c2386a53d1038c3ca222af76a8016dd57e7d67f007746b98025b722"} Dec 02 10:32:35 crc kubenswrapper[4711]: I1202 10:32:35.607905 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3df96cab-0615-4b61-b0e5-726a6ef5b6e1","Type":"ContainerDied","Data":"65342d7dbe7fd7c46edffd08d081be8fd6d2bb22736216cf809b8e547a7e4f50"} Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.286720 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.447806 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-scripts\") pod \"163592a2-c106-47a3-a114-d163861dde5b\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.448039 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-credential-keys\") pod \"163592a2-c106-47a3-a114-d163861dde5b\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.448093 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-combined-ca-bundle\") pod \"163592a2-c106-47a3-a114-d163861dde5b\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.448125 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-config-data\") pod \"163592a2-c106-47a3-a114-d163861dde5b\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.448165 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-fernet-keys\") pod \"163592a2-c106-47a3-a114-d163861dde5b\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.448190 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpt2x\" (UniqueName: 
\"kubernetes.io/projected/163592a2-c106-47a3-a114-d163861dde5b-kube-api-access-tpt2x\") pod \"163592a2-c106-47a3-a114-d163861dde5b\" (UID: \"163592a2-c106-47a3-a114-d163861dde5b\") " Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.468580 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "163592a2-c106-47a3-a114-d163861dde5b" (UID: "163592a2-c106-47a3-a114-d163861dde5b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.469431 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "163592a2-c106-47a3-a114-d163861dde5b" (UID: "163592a2-c106-47a3-a114-d163861dde5b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.472514 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163592a2-c106-47a3-a114-d163861dde5b-kube-api-access-tpt2x" (OuterVolumeSpecName: "kube-api-access-tpt2x") pod "163592a2-c106-47a3-a114-d163861dde5b" (UID: "163592a2-c106-47a3-a114-d163861dde5b"). InnerVolumeSpecName "kube-api-access-tpt2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.474930 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-scripts" (OuterVolumeSpecName: "scripts") pod "163592a2-c106-47a3-a114-d163861dde5b" (UID: "163592a2-c106-47a3-a114-d163861dde5b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.480827 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "163592a2-c106-47a3-a114-d163861dde5b" (UID: "163592a2-c106-47a3-a114-d163861dde5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.496337 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-config-data" (OuterVolumeSpecName: "config-data") pod "163592a2-c106-47a3-a114-d163861dde5b" (UID: "163592a2-c106-47a3-a114-d163861dde5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.550693 4711 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.550727 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.550738 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.550746 4711 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 
10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.550755 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpt2x\" (UniqueName: \"kubernetes.io/projected/163592a2-c106-47a3-a114-d163861dde5b-kube-api-access-tpt2x\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.550767 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163592a2-c106-47a3-a114-d163861dde5b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.647148 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x4zt9" Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.646944 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x4zt9" event={"ID":"163592a2-c106-47a3-a114-d163861dde5b","Type":"ContainerDied","Data":"333acdb5fe919b1e4da196561f5bb0b636d2f6bb66f197a048440a8aee65af8e"} Dec 02 10:32:38 crc kubenswrapper[4711]: I1202 10:32:38.647703 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333acdb5fe919b1e4da196561f5bb0b636d2f6bb66f197a048440a8aee65af8e" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.371012 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7475865c97-ctdkx"] Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.414562 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76998c6f5b-xhr78"] Dec 02 10:32:39 crc kubenswrapper[4711]: E1202 10:32:39.415339 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163592a2-c106-47a3-a114-d163861dde5b" containerName="keystone-bootstrap" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.415526 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="163592a2-c106-47a3-a114-d163861dde5b" containerName="keystone-bootstrap" Dec 02 10:32:39 crc kubenswrapper[4711]: 
I1202 10:32:39.415846 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="163592a2-c106-47a3-a114-d163861dde5b" containerName="keystone-bootstrap" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.417749 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.423640 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.431264 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76998c6f5b-xhr78"] Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.482456 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x4zt9"] Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.491838 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4llp2\" (UniqueName: \"kubernetes.io/projected/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-kube-api-access-4llp2\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.491883 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-secret-key\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.491905 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-tls-certs\") pod \"horizon-76998c6f5b-xhr78\" (UID: 
\"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.491942 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-config-data\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.492274 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-logs\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.492363 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-combined-ca-bundle\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.492490 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-scripts\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.507610 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x4zt9"] Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.526317 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6bd6db97-7jhr9"] Dec 02 10:32:39 crc 
kubenswrapper[4711]: I1202 10:32:39.533771 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b4d9565bd-5nwjn"] Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.535287 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.550554 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b4d9565bd-5nwjn"] Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.593797 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-secret-key\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.593853 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-tls-certs\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.593884 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-config-data\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.593926 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-horizon-tls-certs\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " 
pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.593986 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e4731d-0cea-4530-aba2-86777a8db6cb-logs\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594024 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-logs\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594062 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5e4731d-0cea-4530-aba2-86777a8db6cb-config-data\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594085 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-combined-ca-bundle\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594138 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-scripts\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594167 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5e4731d-0cea-4530-aba2-86777a8db6cb-scripts\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594208 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-horizon-secret-key\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594230 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-combined-ca-bundle\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594249 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5gmk\" (UniqueName: \"kubernetes.io/projected/a5e4731d-0cea-4530-aba2-86777a8db6cb-kube-api-access-p5gmk\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.594301 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4llp2\" (UniqueName: \"kubernetes.io/projected/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-kube-api-access-4llp2\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.598868 
4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-config-data\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.599363 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-scripts\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.599649 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-logs\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.621831 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-combined-ca-bundle\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.627359 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-secret-key\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.634508 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-tls-certs\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.637063 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pk6jj"] Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.638541 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.638647 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4llp2\" (UniqueName: \"kubernetes.io/projected/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-kube-api-access-4llp2\") pod \"horizon-76998c6f5b-xhr78\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.642363 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.642543 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.643105 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.643261 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6k6c6" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.644632 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pk6jj"] Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.646171 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.697968 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5e4731d-0cea-4530-aba2-86777a8db6cb-config-data\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.698085 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5e4731d-0cea-4530-aba2-86777a8db6cb-scripts\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.698127 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-horizon-secret-key\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.698152 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-combined-ca-bundle\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.698175 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5gmk\" (UniqueName: \"kubernetes.io/projected/a5e4731d-0cea-4530-aba2-86777a8db6cb-kube-api-access-p5gmk\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.698238 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-horizon-tls-certs\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.698276 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e4731d-0cea-4530-aba2-86777a8db6cb-logs\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.698662 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e4731d-0cea-4530-aba2-86777a8db6cb-logs\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.699708 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5e4731d-0cea-4530-aba2-86777a8db6cb-scripts\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.703891 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5e4731d-0cea-4530-aba2-86777a8db6cb-config-data\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.716587 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-combined-ca-bundle\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " 
pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.724383 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-horizon-secret-key\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.732446 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e4731d-0cea-4530-aba2-86777a8db6cb-horizon-tls-certs\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.741725 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5gmk\" (UniqueName: \"kubernetes.io/projected/a5e4731d-0cea-4530-aba2-86777a8db6cb-kube-api-access-p5gmk\") pod \"horizon-6b4d9565bd-5nwjn\" (UID: \"a5e4731d-0cea-4530-aba2-86777a8db6cb\") " pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.755713 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.800059 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-config-data\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.800112 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-fernet-keys\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.800811 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-combined-ca-bundle\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.800899 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxd67\" (UniqueName: \"kubernetes.io/projected/5266d7e0-bd1b-4266-b8eb-af6080873ad5-kube-api-access-bxd67\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.801046 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-credential-keys\") pod \"keystone-bootstrap-pk6jj\" (UID: 
\"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.801182 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-scripts\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.855538 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.902648 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-combined-ca-bundle\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.902699 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxd67\" (UniqueName: \"kubernetes.io/projected/5266d7e0-bd1b-4266-b8eb-af6080873ad5-kube-api-access-bxd67\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.902756 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-credential-keys\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.902785 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-scripts\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.902831 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-config-data\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.902892 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-fernet-keys\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.907906 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-scripts\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.908449 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-fernet-keys\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.908493 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-config-data\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " 
pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.909212 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-combined-ca-bundle\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.917814 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-credential-keys\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:39 crc kubenswrapper[4711]: I1202 10:32:39.918660 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxd67\" (UniqueName: \"kubernetes.io/projected/5266d7e0-bd1b-4266-b8eb-af6080873ad5-kube-api-access-bxd67\") pod \"keystone-bootstrap-pk6jj\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:40 crc kubenswrapper[4711]: I1202 10:32:40.123438 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:32:40 crc kubenswrapper[4711]: I1202 10:32:40.855149 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:32:40 crc kubenswrapper[4711]: I1202 10:32:40.923328 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kl96n"] Dec 02 10:32:40 crc kubenswrapper[4711]: I1202 10:32:40.923637 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="dnsmasq-dns" containerID="cri-o://e935f5145a6791ee54709d5061ea46b4b21b18f7d91a3c05d9f33ff26942c4ee" gracePeriod=10 Dec 02 10:32:41 crc kubenswrapper[4711]: I1202 10:32:41.095034 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163592a2-c106-47a3-a114-d163861dde5b" path="/var/lib/kubelet/pods/163592a2-c106-47a3-a114-d163861dde5b/volumes" Dec 02 10:32:41 crc kubenswrapper[4711]: I1202 10:32:41.717538 4711 generic.go:334] "Generic (PLEG): container finished" podID="21313175-93c7-4c32-b581-c77b63cea062" containerID="e935f5145a6791ee54709d5061ea46b4b21b18f7d91a3c05d9f33ff26942c4ee" exitCode=0 Dec 02 10:32:41 crc kubenswrapper[4711]: I1202 10:32:41.717611 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" event={"ID":"21313175-93c7-4c32-b581-c77b63cea062","Type":"ContainerDied","Data":"e935f5145a6791ee54709d5061ea46b4b21b18f7d91a3c05d9f33ff26942c4ee"} Dec 02 10:32:43 crc kubenswrapper[4711]: I1202 10:32:43.634379 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.718683 4711 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.727338 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.757882 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7","Type":"ContainerDied","Data":"cece38de4fb1dcee0d7e9ad01c0f944886e0a451f488d6de775be84dd845bc78"} Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.758005 4711 scope.go:117] "RemoveContainer" containerID="2f5b6a5ceeb8ff8ad215a77749c3a0bb7947b7c3d6c440ba507c747a05d87e51" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.758164 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.762539 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3df96cab-0615-4b61-b0e5-726a6ef5b6e1","Type":"ContainerDied","Data":"9a65ff7860122a870891c572ee2426fe71a860f330da5d12a9654cb18f765733"} Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.762629 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912390 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-config-data\") pod \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912435 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-logs\") pod \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912484 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912524 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpszb\" (UniqueName: \"kubernetes.io/projected/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-kube-api-access-kpszb\") pod \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912541 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-combined-ca-bundle\") pod \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912561 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-httpd-run\") pod \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912609 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-config-data\") pod \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912653 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-scripts\") pod \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912675 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-scripts\") pod \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912720 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-public-tls-certs\") pod \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912743 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-httpd-run\") pod \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912758 4711 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-k594c\" (UniqueName: \"kubernetes.io/projected/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-kube-api-access-k594c\") pod \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912781 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-combined-ca-bundle\") pod \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\" (UID: \"f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912799 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-internal-tls-certs\") pod \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912861 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.912881 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-logs\") pod \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\" (UID: \"3df96cab-0615-4b61-b0e5-726a6ef5b6e1\") " Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.913038 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-logs" (OuterVolumeSpecName: "logs") pod "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" (UID: "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.913253 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.913293 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3df96cab-0615-4b61-b0e5-726a6ef5b6e1" (UID: "3df96cab-0615-4b61-b0e5-726a6ef5b6e1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.914215 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" (UID: "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.916922 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-logs" (OuterVolumeSpecName: "logs") pod "3df96cab-0615-4b61-b0e5-726a6ef5b6e1" (UID: "3df96cab-0615-4b61-b0e5-726a6ef5b6e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.919630 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-scripts" (OuterVolumeSpecName: "scripts") pod "3df96cab-0615-4b61-b0e5-726a6ef5b6e1" (UID: "3df96cab-0615-4b61-b0e5-726a6ef5b6e1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.920460 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" (UID: "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.921235 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-kube-api-access-k594c" (OuterVolumeSpecName: "kube-api-access-k594c") pod "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" (UID: "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7"). InnerVolumeSpecName "kube-api-access-k594c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.922536 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "3df96cab-0615-4b61-b0e5-726a6ef5b6e1" (UID: "3df96cab-0615-4b61-b0e5-726a6ef5b6e1"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.935127 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-scripts" (OuterVolumeSpecName: "scripts") pod "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" (UID: "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.936082 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-kube-api-access-kpszb" (OuterVolumeSpecName: "kube-api-access-kpszb") pod "3df96cab-0615-4b61-b0e5-726a6ef5b6e1" (UID: "3df96cab-0615-4b61-b0e5-726a6ef5b6e1"). InnerVolumeSpecName "kube-api-access-kpszb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.955297 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3df96cab-0615-4b61-b0e5-726a6ef5b6e1" (UID: "3df96cab-0615-4b61-b0e5-726a6ef5b6e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.961762 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" (UID: "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.962296 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" (UID: "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.964075 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-config-data" (OuterVolumeSpecName: "config-data") pod "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" (UID: "f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.969028 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3df96cab-0615-4b61-b0e5-726a6ef5b6e1" (UID: "3df96cab-0615-4b61-b0e5-726a6ef5b6e1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:45 crc kubenswrapper[4711]: I1202 10:32:45.971159 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-config-data" (OuterVolumeSpecName: "config-data") pod "3df96cab-0615-4b61-b0e5-726a6ef5b6e1" (UID: "3df96cab-0615-4b61-b0e5-726a6ef5b6e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015254 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015291 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015303 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015312 4711 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015343 4711 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015353 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k594c\" (UniqueName: \"kubernetes.io/projected/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-kube-api-access-k594c\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015362 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015370 4711 reconciler_common.go:293] "Volume 
detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015433 4711 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015445 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015454 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015467 4711 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015494 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpszb\" (UniqueName: \"kubernetes.io/projected/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-kube-api-access-kpszb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015503 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df96cab-0615-4b61-b0e5-726a6ef5b6e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.015511 4711 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7-httpd-run\") on node \"crc\" DevicePath \"\"" 
Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.032866 4711 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.035779 4711 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.118402 4711 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.118452 4711 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.124111 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.150103 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.169256 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.178766 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:32:46 crc kubenswrapper[4711]: E1202 10:32:46.179201 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerName="glance-log" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.179222 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerName="glance-log" Dec 02 
10:32:46 crc kubenswrapper[4711]: E1202 10:32:46.179237 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerName="glance-httpd" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.179244 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerName="glance-httpd" Dec 02 10:32:46 crc kubenswrapper[4711]: E1202 10:32:46.179268 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerName="glance-log" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.179274 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerName="glance-log" Dec 02 10:32:46 crc kubenswrapper[4711]: E1202 10:32:46.179289 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerName="glance-httpd" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.179295 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerName="glance-httpd" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.179447 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerName="glance-httpd" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.179456 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerName="glance-httpd" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.179468 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" containerName="glance-log" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.179487 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" containerName="glance-log" Dec 02 10:32:46 crc kubenswrapper[4711]: 
I1202 10:32:46.182385 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.184263 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qcf8c" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.186488 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.187813 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.191155 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.194782 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.213829 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.227485 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.240794 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.240922 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.244735 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.246824 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.322435 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.322516 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnprv\" (UniqueName: \"kubernetes.io/projected/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-kube-api-access-rnprv\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.322588 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.322651 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.322695 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.322716 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.322735 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-logs\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.322776 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.423779 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.423835 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.423870 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.423897 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.423925 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-logs\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.423969 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " 
pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424000 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424034 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-logs\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424052 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424070 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424103 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 
10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424119 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424102 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424546 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424619 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnprv\" (UniqueName: \"kubernetes.io/projected/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-kube-api-access-rnprv\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424652 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-logs\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: 
I1202 10:32:46.424659 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424755 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.424838 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2xq8\" (UniqueName: \"kubernetes.io/projected/e10ea6af-6f3d-468b-be7c-80e79fb0d899-kube-api-access-b2xq8\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.429074 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-config-data\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.429970 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-scripts\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.432607 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.441263 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.441685 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnprv\" (UniqueName: \"kubernetes.io/projected/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-kube-api-access-rnprv\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.455417 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.504432 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.528675 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.529273 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-logs\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.529294 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.529309 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.529344 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc 
kubenswrapper[4711]: I1202 10:32:46.529388 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.529419 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2xq8\" (UniqueName: \"kubernetes.io/projected/e10ea6af-6f3d-468b-be7c-80e79fb0d899-kube-api-access-b2xq8\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.529528 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.530095 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.530323 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.530341 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-logs\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.534533 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.534810 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.535440 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.536257 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.561660 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.580600 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2xq8\" (UniqueName: \"kubernetes.io/projected/e10ea6af-6f3d-468b-be7c-80e79fb0d899-kube-api-access-b2xq8\") pod \"glance-default-internal-api-0\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:32:46 crc kubenswrapper[4711]: I1202 10:32:46.861202 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:32:47 crc kubenswrapper[4711]: I1202 10:32:47.089842 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df96cab-0615-4b61-b0e5-726a6ef5b6e1" path="/var/lib/kubelet/pods/3df96cab-0615-4b61-b0e5-726a6ef5b6e1/volumes" Dec 02 10:32:47 crc kubenswrapper[4711]: I1202 10:32:47.090507 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7" path="/var/lib/kubelet/pods/f382ee5c-12db-4ff7-adf0-00ec0f4bf9f7/volumes" Dec 02 10:32:47 crc kubenswrapper[4711]: E1202 10:32:47.720611 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 02 10:32:47 crc kubenswrapper[4711]: E1202 10:32:47.721000 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jb9lf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-ss597_openstack(426ff483-f882-4d91-b5da-bab147d2886d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:32:47 crc kubenswrapper[4711]: E1202 10:32:47.722554 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-ss597" podUID="426ff483-f882-4d91-b5da-bab147d2886d" Dec 02 10:32:47 crc kubenswrapper[4711]: E1202 10:32:47.745396 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 02 10:32:47 crc kubenswrapper[4711]: E1202 10:32:47.745616 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n85h5dfh688h58bh589h9fhc5h689h546hb5h694h96h4h575hb7hdh5bbh66fh59bh58fhc6h5cch59ch584h695h55ch9ch648h5c6hbdh59bhc8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqldj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7475865c97-ctdkx_openstack(1771ec89-03e5-4202-953a-7c745e18b7f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:32:47 crc kubenswrapper[4711]: E1202 
10:32:47.747823 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7475865c97-ctdkx" podUID="1771ec89-03e5-4202-953a-7c745e18b7f1" Dec 02 10:32:47 crc kubenswrapper[4711]: E1202 10:32:47.781243 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-ss597" podUID="426ff483-f882-4d91-b5da-bab147d2886d" Dec 02 10:32:48 crc kubenswrapper[4711]: I1202 10:32:48.634735 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Dec 02 10:32:49 crc kubenswrapper[4711]: E1202 10:32:49.292085 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 02 10:32:49 crc kubenswrapper[4711]: E1202 10:32:49.292571 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n554h5fch574h4h6chb6h66ch699hc6h564h646h9dhd9h5b7h68h54hffh4h575hcdhcdh96h59dh7fhbh658h579h585h646hbh59dh558q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-smjm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f6bd6db97-7jhr9_openstack(63935e2a-d282-4b50-b62b-d89c82a6ef1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:32:49 crc kubenswrapper[4711]: E1202 10:32:49.295052 
4711 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f6bd6db97-7jhr9" podUID="63935e2a-d282-4b50-b62b-d89c82a6ef1f" Dec 02 10:32:52 crc kubenswrapper[4711]: I1202 10:32:52.614349 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:32:52 crc kubenswrapper[4711]: I1202 10:32:52.614645 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.593767 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.634802 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.635055 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.763802 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-scripts\") pod \"1771ec89-03e5-4202-953a-7c745e18b7f1\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.763895 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1771ec89-03e5-4202-953a-7c745e18b7f1-horizon-secret-key\") pod \"1771ec89-03e5-4202-953a-7c745e18b7f1\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.764083 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqldj\" (UniqueName: \"kubernetes.io/projected/1771ec89-03e5-4202-953a-7c745e18b7f1-kube-api-access-jqldj\") pod \"1771ec89-03e5-4202-953a-7c745e18b7f1\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.764140 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-config-data\") pod \"1771ec89-03e5-4202-953a-7c745e18b7f1\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " Dec 02 10:32:58 crc 
kubenswrapper[4711]: I1202 10:32:58.764219 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1771ec89-03e5-4202-953a-7c745e18b7f1-logs\") pod \"1771ec89-03e5-4202-953a-7c745e18b7f1\" (UID: \"1771ec89-03e5-4202-953a-7c745e18b7f1\") " Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.764686 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-scripts" (OuterVolumeSpecName: "scripts") pod "1771ec89-03e5-4202-953a-7c745e18b7f1" (UID: "1771ec89-03e5-4202-953a-7c745e18b7f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.765021 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1771ec89-03e5-4202-953a-7c745e18b7f1-logs" (OuterVolumeSpecName: "logs") pod "1771ec89-03e5-4202-953a-7c745e18b7f1" (UID: "1771ec89-03e5-4202-953a-7c745e18b7f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.766234 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-config-data" (OuterVolumeSpecName: "config-data") pod "1771ec89-03e5-4202-953a-7c745e18b7f1" (UID: "1771ec89-03e5-4202-953a-7c745e18b7f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.773164 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1771ec89-03e5-4202-953a-7c745e18b7f1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1771ec89-03e5-4202-953a-7c745e18b7f1" (UID: "1771ec89-03e5-4202-953a-7c745e18b7f1"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.775335 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1771ec89-03e5-4202-953a-7c745e18b7f1-kube-api-access-jqldj" (OuterVolumeSpecName: "kube-api-access-jqldj") pod "1771ec89-03e5-4202-953a-7c745e18b7f1" (UID: "1771ec89-03e5-4202-953a-7c745e18b7f1"). InnerVolumeSpecName "kube-api-access-jqldj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.866165 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1771ec89-03e5-4202-953a-7c745e18b7f1-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.866215 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.866228 4711 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1771ec89-03e5-4202-953a-7c745e18b7f1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.866243 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqldj\" (UniqueName: \"kubernetes.io/projected/1771ec89-03e5-4202-953a-7c745e18b7f1-kube-api-access-jqldj\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.866255 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1771ec89-03e5-4202-953a-7c745e18b7f1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:58 crc kubenswrapper[4711]: E1202 10:32:58.885722 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 02 10:32:58 crc kubenswrapper[4711]: E1202 10:32:58.886047 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65ch596h9bh5f4h77h687h87h698hbfh5ffh544h5d5h687h97h59dhcbh688h656h5dfh699h65dh58ch78h597hb9h64bhb5h5c9h5cdh98hd6hddq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2krl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(98dbf68a-a027-4b09-a124-5438406d4b4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.906698 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7475865c97-ctdkx" event={"ID":"1771ec89-03e5-4202-953a-7c745e18b7f1","Type":"ContainerDied","Data":"c2e3bff098ead1a68187e0150a5f2eeae79a21be00c803d18f6797cdcdd1bca8"} Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.906778 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7475865c97-ctdkx" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.980617 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:32:58 crc kubenswrapper[4711]: I1202 10:32:58.991420 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.036892 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7475865c97-ctdkx"] Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.054089 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7475865c97-ctdkx"] Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.091291 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1771ec89-03e5-4202-953a-7c745e18b7f1" path="/var/lib/kubelet/pods/1771ec89-03e5-4202-953a-7c745e18b7f1/volumes" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169280 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63935e2a-d282-4b50-b62b-d89c82a6ef1f-logs\") pod \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169354 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-scripts\") pod \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169443 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pqhk\" (UniqueName: \"kubernetes.io/projected/21313175-93c7-4c32-b581-c77b63cea062-kube-api-access-2pqhk\") pod \"21313175-93c7-4c32-b581-c77b63cea062\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169503 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-dns-svc\") pod \"21313175-93c7-4c32-b581-c77b63cea062\" (UID: 
\"21313175-93c7-4c32-b581-c77b63cea062\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169548 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-sb\") pod \"21313175-93c7-4c32-b581-c77b63cea062\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169578 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-config\") pod \"21313175-93c7-4c32-b581-c77b63cea062\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169594 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63935e2a-d282-4b50-b62b-d89c82a6ef1f-horizon-secret-key\") pod \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169660 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-config-data\") pod \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169658 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63935e2a-d282-4b50-b62b-d89c82a6ef1f-logs" (OuterVolumeSpecName: "logs") pod "63935e2a-d282-4b50-b62b-d89c82a6ef1f" (UID: "63935e2a-d282-4b50-b62b-d89c82a6ef1f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169686 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smjm9\" (UniqueName: \"kubernetes.io/projected/63935e2a-d282-4b50-b62b-d89c82a6ef1f-kube-api-access-smjm9\") pod \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\" (UID: \"63935e2a-d282-4b50-b62b-d89c82a6ef1f\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169737 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-nb\") pod \"21313175-93c7-4c32-b581-c77b63cea062\" (UID: \"21313175-93c7-4c32-b581-c77b63cea062\") " Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.169840 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-scripts" (OuterVolumeSpecName: "scripts") pod "63935e2a-d282-4b50-b62b-d89c82a6ef1f" (UID: "63935e2a-d282-4b50-b62b-d89c82a6ef1f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.170133 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63935e2a-d282-4b50-b62b-d89c82a6ef1f-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.170146 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.170474 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-config-data" (OuterVolumeSpecName: "config-data") pod "63935e2a-d282-4b50-b62b-d89c82a6ef1f" (UID: "63935e2a-d282-4b50-b62b-d89c82a6ef1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.174060 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63935e2a-d282-4b50-b62b-d89c82a6ef1f-kube-api-access-smjm9" (OuterVolumeSpecName: "kube-api-access-smjm9") pod "63935e2a-d282-4b50-b62b-d89c82a6ef1f" (UID: "63935e2a-d282-4b50-b62b-d89c82a6ef1f"). InnerVolumeSpecName "kube-api-access-smjm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.174060 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63935e2a-d282-4b50-b62b-d89c82a6ef1f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "63935e2a-d282-4b50-b62b-d89c82a6ef1f" (UID: "63935e2a-d282-4b50-b62b-d89c82a6ef1f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.174379 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21313175-93c7-4c32-b581-c77b63cea062-kube-api-access-2pqhk" (OuterVolumeSpecName: "kube-api-access-2pqhk") pod "21313175-93c7-4c32-b581-c77b63cea062" (UID: "21313175-93c7-4c32-b581-c77b63cea062"). InnerVolumeSpecName "kube-api-access-2pqhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.215927 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21313175-93c7-4c32-b581-c77b63cea062" (UID: "21313175-93c7-4c32-b581-c77b63cea062"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.216740 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21313175-93c7-4c32-b581-c77b63cea062" (UID: "21313175-93c7-4c32-b581-c77b63cea062"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.218502 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21313175-93c7-4c32-b581-c77b63cea062" (UID: "21313175-93c7-4c32-b581-c77b63cea062"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.220310 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-config" (OuterVolumeSpecName: "config") pod "21313175-93c7-4c32-b581-c77b63cea062" (UID: "21313175-93c7-4c32-b581-c77b63cea062"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.273909 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.273945 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pqhk\" (UniqueName: \"kubernetes.io/projected/21313175-93c7-4c32-b581-c77b63cea062-kube-api-access-2pqhk\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.273977 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.273990 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.274001 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21313175-93c7-4c32-b581-c77b63cea062-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.274012 4711 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/63935e2a-d282-4b50-b62b-d89c82a6ef1f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.274022 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63935e2a-d282-4b50-b62b-d89c82a6ef1f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.274033 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smjm9\" (UniqueName: \"kubernetes.io/projected/63935e2a-d282-4b50-b62b-d89c82a6ef1f-kube-api-access-smjm9\") on node \"crc\" DevicePath \"\"" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.920125 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6bd6db97-7jhr9" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.920125 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6bd6db97-7jhr9" event={"ID":"63935e2a-d282-4b50-b62b-d89c82a6ef1f","Type":"ContainerDied","Data":"32ec00a0defb81147dedd9144a060c7107a2f40c9ea2e59f3550e2e3d7a233f9"} Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.923738 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" event={"ID":"21313175-93c7-4c32-b581-c77b63cea062","Type":"ContainerDied","Data":"6791d63f670dfe2c8c794ec9853f3a2ad1c0880b084cf939e3fdb2a0c2d98763"} Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.923787 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.973681 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6bd6db97-7jhr9"] Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.985780 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f6bd6db97-7jhr9"] Dec 02 10:32:59 crc kubenswrapper[4711]: I1202 10:32:59.994149 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kl96n"] Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.007854 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kl96n"] Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.111035 4711 scope.go:117] "RemoveContainer" containerID="537fcbb32dc8c66417a2c8610fd60bc7d5bd8dd6ed131363896134067937d9b1" Dec 02 10:33:00 crc kubenswrapper[4711]: E1202 10:33:00.121324 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 02 10:33:00 crc kubenswrapper[4711]: E1202 10:33:00.121486 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jf2tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-hhnhk_openstack(c36d7741-4744-4076-ad79-2cd1aca48cec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:33:00 crc kubenswrapper[4711]: E1202 10:33:00.123742 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-hhnhk" podUID="c36d7741-4744-4076-ad79-2cd1aca48cec" Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.305217 4711 scope.go:117] "RemoveContainer" containerID="f403fa777c2386a53d1038c3ca222af76a8016dd57e7d67f007746b98025b722" Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.359879 4711 scope.go:117] "RemoveContainer" containerID="65342d7dbe7fd7c46edffd08d081be8fd6d2bb22736216cf809b8e547a7e4f50" Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.393193 4711 scope.go:117] "RemoveContainer" containerID="e935f5145a6791ee54709d5061ea46b4b21b18f7d91a3c05d9f33ff26942c4ee" Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.452133 4711 scope.go:117] "RemoveContainer" containerID="5a1ec9d459dcfd3975177c7b1cb543458adc5ba2b1ca30f1f0413575ccd99ed0" Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.667002 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76998c6f5b-xhr78"] Dec 02 10:33:00 crc kubenswrapper[4711]: W1202 10:33:00.672821 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9435ea7_574e_4a04_ad38_aa7a1cd82ebd.slice/crio-7beed1c54625f990a56f45443cb1ec2cf01afe76157982c7e8bf8211fd77e870 WatchSource:0}: Error finding container 
7beed1c54625f990a56f45443cb1ec2cf01afe76157982c7e8bf8211fd77e870: Status 404 returned error can't find the container with id 7beed1c54625f990a56f45443cb1ec2cf01afe76157982c7e8bf8211fd77e870 Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.864327 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pk6jj"] Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.884067 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b4d9565bd-5nwjn"] Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.931597 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76998c6f5b-xhr78" event={"ID":"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd","Type":"ContainerStarted","Data":"7beed1c54625f990a56f45443cb1ec2cf01afe76157982c7e8bf8211fd77e870"} Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.934671 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xfj2j" event={"ID":"708582b5-ed1b-43e9-959a-482979700291","Type":"ContainerStarted","Data":"847b4ebc5d451fed55329ad84a6c8ab342e34e014b63d8e09a820217604ba908"} Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.939032 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7855f9b6bf-w42l8" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerName="horizon-log" containerID="cri-o://fefb51e0fa93ac740dbe66a6ebc38cd2d7b69807a2a926149dacefe8912ebeba" gracePeriod=30 Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.939336 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7855f9b6bf-w42l8" event={"ID":"e2d7602c-dae1-4110-b8db-aa51a0761754","Type":"ContainerStarted","Data":"f4ff8ab094e9c8e8e92082c13500bdddc1d241cdf4952cc6b2a062016c5737c6"} Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.939370 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7855f9b6bf-w42l8" 
event={"ID":"e2d7602c-dae1-4110-b8db-aa51a0761754","Type":"ContainerStarted","Data":"fefb51e0fa93ac740dbe66a6ebc38cd2d7b69807a2a926149dacefe8912ebeba"} Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.939435 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7855f9b6bf-w42l8" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerName="horizon" containerID="cri-o://f4ff8ab094e9c8e8e92082c13500bdddc1d241cdf4952cc6b2a062016c5737c6" gracePeriod=30 Dec 02 10:33:00 crc kubenswrapper[4711]: E1202 10:33:00.940941 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-hhnhk" podUID="c36d7741-4744-4076-ad79-2cd1aca48cec" Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.959683 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xfj2j" podStartSLOduration=3.664243356 podStartE2EDuration="30.959663063s" podCreationTimestamp="2025-12-02 10:32:30 +0000 UTC" firstStartedPulling="2025-12-02 10:32:31.60710586 +0000 UTC m=+1141.316472307" lastFinishedPulling="2025-12-02 10:32:58.902525567 +0000 UTC m=+1168.611892014" observedRunningTime="2025-12-02 10:33:00.9540231 +0000 UTC m=+1170.663389577" watchObservedRunningTime="2025-12-02 10:33:00.959663063 +0000 UTC m=+1170.669029520" Dec 02 10:33:00 crc kubenswrapper[4711]: I1202 10:33:00.995365 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7855f9b6bf-w42l8" podStartSLOduration=4.587459178 podStartE2EDuration="31.995341642s" podCreationTimestamp="2025-12-02 10:32:29 +0000 UTC" firstStartedPulling="2025-12-02 10:32:31.494600662 +0000 UTC m=+1141.203967109" lastFinishedPulling="2025-12-02 10:32:58.902483126 +0000 UTC m=+1168.611849573" 
observedRunningTime="2025-12-02 10:33:00.987655064 +0000 UTC m=+1170.697021511" watchObservedRunningTime="2025-12-02 10:33:00.995341642 +0000 UTC m=+1170.704708089" Dec 02 10:33:01 crc kubenswrapper[4711]: W1202 10:33:01.096367 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5e4731d_0cea_4530_aba2_86777a8db6cb.slice/crio-745ea6d205926abad42ab73784be844b691040a4bccac3883e92a6e1f0749c0b WatchSource:0}: Error finding container 745ea6d205926abad42ab73784be844b691040a4bccac3883e92a6e1f0749c0b: Status 404 returned error can't find the container with id 745ea6d205926abad42ab73784be844b691040a4bccac3883e92a6e1f0749c0b Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.097748 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21313175-93c7-4c32-b581-c77b63cea062" path="/var/lib/kubelet/pods/21313175-93c7-4c32-b581-c77b63cea062/volumes" Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.098811 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63935e2a-d282-4b50-b62b-d89c82a6ef1f" path="/var/lib/kubelet/pods/63935e2a-d282-4b50-b62b-d89c82a6ef1f/volumes" Dec 02 10:33:01 crc kubenswrapper[4711]: W1202 10:33:01.100519 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5266d7e0_bd1b_4266_b8eb_af6080873ad5.slice/crio-4d8350de2130d5e51354bead2142f707d90e291628fa2f795af85c56b3845286 WatchSource:0}: Error finding container 4d8350de2130d5e51354bead2142f707d90e291628fa2f795af85c56b3845286: Status 404 returned error can't find the container with id 4d8350de2130d5e51354bead2142f707d90e291628fa2f795af85c56b3845286 Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.187720 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.884790 4711 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.949182 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98dbf68a-a027-4b09-a124-5438406d4b4f","Type":"ContainerStarted","Data":"4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.952033 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9565bd-5nwjn" event={"ID":"a5e4731d-0cea-4530-aba2-86777a8db6cb","Type":"ContainerStarted","Data":"22b7b533dc3844c50b1cc44321e4485bf500c2293d3e254172dafdd83ab5776e"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.952072 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9565bd-5nwjn" event={"ID":"a5e4731d-0cea-4530-aba2-86777a8db6cb","Type":"ContainerStarted","Data":"745ea6d205926abad42ab73784be844b691040a4bccac3883e92a6e1f0749c0b"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.954410 4711 generic.go:334] "Generic (PLEG): container finished" podID="44e7dd62-8534-48ac-9b10-3cafac8b1192" containerID="5447f2d26caf26f18e97fa62c49a031feb64bd956fbad5d280640e5eb96be6bf" exitCode=0 Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.954493 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rllxq" event={"ID":"44e7dd62-8534-48ac-9b10-3cafac8b1192","Type":"ContainerDied","Data":"5447f2d26caf26f18e97fa62c49a031feb64bd956fbad5d280640e5eb96be6bf"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.958968 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pk6jj" event={"ID":"5266d7e0-bd1b-4266-b8eb-af6080873ad5","Type":"ContainerStarted","Data":"49b8995f76493f4605ebe7c93ebc2d87e990271c794df311ee5a27f6ab3b0e2f"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.959033 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-pk6jj" event={"ID":"5266d7e0-bd1b-4266-b8eb-af6080873ad5","Type":"ContainerStarted","Data":"4d8350de2130d5e51354bead2142f707d90e291628fa2f795af85c56b3845286"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.969625 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76998c6f5b-xhr78" event={"ID":"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd","Type":"ContainerStarted","Data":"fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.971644 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e10ea6af-6f3d-468b-be7c-80e79fb0d899","Type":"ContainerStarted","Data":"60e642d9bcea417243f93c0d627808b200fdc0e86055d38d830ff53c73104078"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.975102 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7bbf095c-359d-4e14-95e8-d75e57a7f7c2","Type":"ContainerStarted","Data":"301ab809156b22229120bb0a606e7e4ac0cb377c2838c0b0ab3f3846b7c20bbb"} Dec 02 10:33:01 crc kubenswrapper[4711]: I1202 10:33:01.990758 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pk6jj" podStartSLOduration=22.990734588 podStartE2EDuration="22.990734588s" podCreationTimestamp="2025-12-02 10:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:01.982243527 +0000 UTC m=+1171.691609984" watchObservedRunningTime="2025-12-02 10:33:01.990734588 +0000 UTC m=+1171.700101035" Dec 02 10:33:02 crc kubenswrapper[4711]: I1202 10:33:02.993601 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"e10ea6af-6f3d-468b-be7c-80e79fb0d899","Type":"ContainerStarted","Data":"f0894e3708f5e05cad43db87fe46de58995ed3905fec3957c2ca884be2d0545c"} Dec 02 10:33:02 crc kubenswrapper[4711]: I1202 10:33:02.997144 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7bbf095c-359d-4e14-95e8-d75e57a7f7c2","Type":"ContainerStarted","Data":"948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb"} Dec 02 10:33:02 crc kubenswrapper[4711]: I1202 10:33:02.998815 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4d9565bd-5nwjn" event={"ID":"a5e4731d-0cea-4530-aba2-86777a8db6cb","Type":"ContainerStarted","Data":"7e309eb24f98af650af836ee7220975a623d1ab7040d55a4fdef0d2a2da172e7"} Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.001233 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ss597" event={"ID":"426ff483-f882-4d91-b5da-bab147d2886d","Type":"ContainerStarted","Data":"6cac01906a9791b422daf53ba8810283b460891ae2b777371dc1dca71ca8866c"} Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.006323 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76998c6f5b-xhr78" event={"ID":"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd","Type":"ContainerStarted","Data":"889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd"} Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.038638 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b4d9565bd-5nwjn" podStartSLOduration=24.0386163 podStartE2EDuration="24.0386163s" podCreationTimestamp="2025-12-02 10:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:03.0253323 +0000 UTC m=+1172.734698747" watchObservedRunningTime="2025-12-02 10:33:03.0386163 +0000 UTC m=+1172.747982747" Dec 02 10:33:03 crc 
kubenswrapper[4711]: I1202 10:33:03.072860 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ss597" podStartSLOduration=1.980737305 podStartE2EDuration="33.072831721s" podCreationTimestamp="2025-12-02 10:32:30 +0000 UTC" firstStartedPulling="2025-12-02 10:32:31.581480463 +0000 UTC m=+1141.290846910" lastFinishedPulling="2025-12-02 10:33:02.673574879 +0000 UTC m=+1172.382941326" observedRunningTime="2025-12-02 10:33:03.043934205 +0000 UTC m=+1172.753300642" watchObservedRunningTime="2025-12-02 10:33:03.072831721 +0000 UTC m=+1172.782198178" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.088820 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-76998c6f5b-xhr78" podStartSLOduration=24.088546348 podStartE2EDuration="24.088546348s" podCreationTimestamp="2025-12-02 10:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:03.068870433 +0000 UTC m=+1172.778236900" watchObservedRunningTime="2025-12-02 10:33:03.088546348 +0000 UTC m=+1172.797912795" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.385142 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rllxq" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.464531 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-config\") pod \"44e7dd62-8534-48ac-9b10-3cafac8b1192\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.464933 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvpnc\" (UniqueName: \"kubernetes.io/projected/44e7dd62-8534-48ac-9b10-3cafac8b1192-kube-api-access-mvpnc\") pod \"44e7dd62-8534-48ac-9b10-3cafac8b1192\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.465378 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-combined-ca-bundle\") pod \"44e7dd62-8534-48ac-9b10-3cafac8b1192\" (UID: \"44e7dd62-8534-48ac-9b10-3cafac8b1192\") " Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.474187 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e7dd62-8534-48ac-9b10-3cafac8b1192-kube-api-access-mvpnc" (OuterVolumeSpecName: "kube-api-access-mvpnc") pod "44e7dd62-8534-48ac-9b10-3cafac8b1192" (UID: "44e7dd62-8534-48ac-9b10-3cafac8b1192"). InnerVolumeSpecName "kube-api-access-mvpnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.498225 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-config" (OuterVolumeSpecName: "config") pod "44e7dd62-8534-48ac-9b10-3cafac8b1192" (UID: "44e7dd62-8534-48ac-9b10-3cafac8b1192"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.520069 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44e7dd62-8534-48ac-9b10-3cafac8b1192" (UID: "44e7dd62-8534-48ac-9b10-3cafac8b1192"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.568813 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.568850 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/44e7dd62-8534-48ac-9b10-3cafac8b1192-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.568866 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvpnc\" (UniqueName: \"kubernetes.io/projected/44e7dd62-8534-48ac-9b10-3cafac8b1192-kube-api-access-mvpnc\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:03 crc kubenswrapper[4711]: I1202 10:33:03.635811 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-kl96n" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.019653 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e10ea6af-6f3d-468b-be7c-80e79fb0d899","Type":"ContainerStarted","Data":"6c916be1505ac21d2991032d59d831754d806cb475012bfd7ab4b2a0ca87f928"} Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.039867 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7bbf095c-359d-4e14-95e8-d75e57a7f7c2","Type":"ContainerStarted","Data":"d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076"} Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.048602 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=18.048582893 podStartE2EDuration="18.048582893s" podCreationTimestamp="2025-12-02 10:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:04.046977999 +0000 UTC m=+1173.756344446" watchObservedRunningTime="2025-12-02 10:33:04.048582893 +0000 UTC m=+1173.757949340" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.049307 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rllxq" event={"ID":"44e7dd62-8534-48ac-9b10-3cafac8b1192","Type":"ContainerDied","Data":"eb9a6a931a0188ebc90a981ea3b3bf17f21c9adb22b783edc2f2835b2647d9b8"} Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.049341 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb9a6a931a0188ebc90a981ea3b3bf17f21c9adb22b783edc2f2835b2647d9b8" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.049496 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rllxq" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.112448 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=18.112428398 podStartE2EDuration="18.112428398s" podCreationTimestamp="2025-12-02 10:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:04.088078416 +0000 UTC m=+1173.797444883" watchObservedRunningTime="2025-12-02 10:33:04.112428398 +0000 UTC m=+1173.821794845" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.193525 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jvjm2"] Dec 02 10:33:04 crc kubenswrapper[4711]: E1202 10:33:04.193944 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="init" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.193981 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="init" Dec 02 10:33:04 crc kubenswrapper[4711]: E1202 10:33:04.194006 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e7dd62-8534-48ac-9b10-3cafac8b1192" containerName="neutron-db-sync" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.194012 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e7dd62-8534-48ac-9b10-3cafac8b1192" containerName="neutron-db-sync" Dec 02 10:33:04 crc kubenswrapper[4711]: E1202 10:33:04.194040 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="dnsmasq-dns" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.194046 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="dnsmasq-dns" Dec 02 10:33:04 crc kubenswrapper[4711]: 
I1202 10:33:04.194256 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e7dd62-8534-48ac-9b10-3cafac8b1192" containerName="neutron-db-sync" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.194428 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="21313175-93c7-4c32-b581-c77b63cea062" containerName="dnsmasq-dns" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.195315 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.219499 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jvjm2"] Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.290750 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c7694fb9b-29wkn"] Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.292248 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.292376 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.292447 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.292526 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-svc\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.292549 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-config\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.292601 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kg9\" (UniqueName: \"kubernetes.io/projected/c2a80451-d670-436c-9da0-20e3aec8e2ad-kube-api-access-55kg9\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.292633 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.298214 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.298424 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.298530 4711 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"neutron-httpd-config" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.303770 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hrxtj" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.304519 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c7694fb9b-29wkn"] Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394046 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttkqh\" (UniqueName: \"kubernetes.io/projected/87082f2a-7ab6-44ad-95e0-226a0b7b416d-kube-api-access-ttkqh\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394117 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-config\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394154 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kg9\" (UniqueName: \"kubernetes.io/projected/c2a80451-d670-436c-9da0-20e3aec8e2ad-kube-api-access-55kg9\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394214 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc 
kubenswrapper[4711]: I1202 10:33:04.394268 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394305 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-httpd-config\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394359 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394388 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-ovndb-tls-certs\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394416 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-combined-ca-bundle\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc 
kubenswrapper[4711]: I1202 10:33:04.394439 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-svc\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.394462 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-config\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.395289 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-config\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.396118 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.396670 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.397218 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.397813 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-svc\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.426723 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kg9\" (UniqueName: \"kubernetes.io/projected/c2a80451-d670-436c-9da0-20e3aec8e2ad-kube-api-access-55kg9\") pod \"dnsmasq-dns-6b7b667979-jvjm2\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.496483 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-httpd-config\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.496589 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-ovndb-tls-certs\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.496629 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-combined-ca-bundle\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.496700 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttkqh\" (UniqueName: \"kubernetes.io/projected/87082f2a-7ab6-44ad-95e0-226a0b7b416d-kube-api-access-ttkqh\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.496734 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-config\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.502678 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-config\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.504224 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-combined-ca-bundle\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.509036 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-httpd-config\") pod \"neutron-6c7694fb9b-29wkn\" (UID: 
\"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.515248 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-ovndb-tls-certs\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.520063 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttkqh\" (UniqueName: \"kubernetes.io/projected/87082f2a-7ab6-44ad-95e0-226a0b7b416d-kube-api-access-ttkqh\") pod \"neutron-6c7694fb9b-29wkn\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.526983 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:04 crc kubenswrapper[4711]: I1202 10:33:04.634913 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:05 crc kubenswrapper[4711]: I1202 10:33:05.227781 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jvjm2"] Dec 02 10:33:05 crc kubenswrapper[4711]: I1202 10:33:05.472136 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c7694fb9b-29wkn"] Dec 02 10:33:05 crc kubenswrapper[4711]: W1202 10:33:05.472164 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87082f2a_7ab6_44ad_95e0_226a0b7b416d.slice/crio-0d5e287981487b0e11bf1bec9cc5f7d9eace165697d80830569d723149eecf99 WatchSource:0}: Error finding container 0d5e287981487b0e11bf1bec9cc5f7d9eace165697d80830569d723149eecf99: Status 404 returned error can't find the container with id 0d5e287981487b0e11bf1bec9cc5f7d9eace165697d80830569d723149eecf99 Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.084643 4711 generic.go:334] "Generic (PLEG): container finished" podID="c2a80451-d670-436c-9da0-20e3aec8e2ad" containerID="13f6e115c5c98c7af32a89dd1aa6aff2c5fb3c0deb9942298a086d8a09c382f9" exitCode=0 Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.085231 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" event={"ID":"c2a80451-d670-436c-9da0-20e3aec8e2ad","Type":"ContainerDied","Data":"13f6e115c5c98c7af32a89dd1aa6aff2c5fb3c0deb9942298a086d8a09c382f9"} Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.085906 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" event={"ID":"c2a80451-d670-436c-9da0-20e3aec8e2ad","Type":"ContainerStarted","Data":"8233e68325d49b1a616707b6115119ea13c558a3642d5fcf4facc13195354585"} Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.091616 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c7694fb9b-29wkn" 
event={"ID":"87082f2a-7ab6-44ad-95e0-226a0b7b416d","Type":"ContainerStarted","Data":"65719ca8b2cd671cf2d07b24c7c6b6545e86c7b7ba2e9697aa489beb7f8b36d2"} Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.091664 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c7694fb9b-29wkn" event={"ID":"87082f2a-7ab6-44ad-95e0-226a0b7b416d","Type":"ContainerStarted","Data":"0d5e287981487b0e11bf1bec9cc5f7d9eace165697d80830569d723149eecf99"} Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.505740 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.505802 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.545072 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.561334 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.861990 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.862834 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.914412 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b445d9db9-64xt2"] Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.916027 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.919812 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.921575 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b445d9db9-64xt2"] Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.923284 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 02 10:33:06 crc kubenswrapper[4711]: I1202 10:33:06.979182 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.007145 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.051197 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-internal-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.051295 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-httpd-config\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.051328 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-combined-ca-bundle\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.051375 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-ovndb-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.051411 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcrjm\" (UniqueName: \"kubernetes.io/projected/886c7d1f-5204-436e-a656-68b1ac98b586-kube-api-access-jcrjm\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.051478 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-config\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.051517 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-public-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.113636 4711 generic.go:334] "Generic (PLEG): container finished" podID="5266d7e0-bd1b-4266-b8eb-af6080873ad5" 
containerID="49b8995f76493f4605ebe7c93ebc2d87e990271c794df311ee5a27f6ab3b0e2f" exitCode=0 Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.113719 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pk6jj" event={"ID":"5266d7e0-bd1b-4266-b8eb-af6080873ad5","Type":"ContainerDied","Data":"49b8995f76493f4605ebe7c93ebc2d87e990271c794df311ee5a27f6ab3b0e2f"} Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.131425 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c7694fb9b-29wkn" event={"ID":"87082f2a-7ab6-44ad-95e0-226a0b7b416d","Type":"ContainerStarted","Data":"a4614fb25fe8ccc103806abab23ae35b75858afdde026a38f1440114e1ab7712"} Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.132869 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.134481 4711 generic.go:334] "Generic (PLEG): container finished" podID="708582b5-ed1b-43e9-959a-482979700291" containerID="847b4ebc5d451fed55329ad84a6c8ab342e34e014b63d8e09a820217604ba908" exitCode=0 Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.134533 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xfj2j" event={"ID":"708582b5-ed1b-43e9-959a-482979700291","Type":"ContainerDied","Data":"847b4ebc5d451fed55329ad84a6c8ab342e34e014b63d8e09a820217604ba908"} Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.135572 4711 generic.go:334] "Generic (PLEG): container finished" podID="426ff483-f882-4d91-b5da-bab147d2886d" containerID="6cac01906a9791b422daf53ba8810283b460891ae2b777371dc1dca71ca8866c" exitCode=0 Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.135625 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ss597" 
event={"ID":"426ff483-f882-4d91-b5da-bab147d2886d","Type":"ContainerDied","Data":"6cac01906a9791b422daf53ba8810283b460891ae2b777371dc1dca71ca8866c"} Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.138333 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" event={"ID":"c2a80451-d670-436c-9da0-20e3aec8e2ad","Type":"ContainerStarted","Data":"f48da6b8eb9365751b6196964e060694bcc396d60ce033a727cf4a1fcea6cf8d"} Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.138387 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.138400 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.138407 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.138430 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.138862 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.152610 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcrjm\" (UniqueName: \"kubernetes.io/projected/886c7d1f-5204-436e-a656-68b1ac98b586-kube-api-access-jcrjm\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.152689 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-config\") pod 
\"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.152823 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-public-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.152920 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-internal-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.154793 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-httpd-config\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.154887 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-combined-ca-bundle\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.155619 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-ovndb-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " 
pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.160726 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-config\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.175877 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-ovndb-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.176305 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-internal-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.177582 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-public-tls-certs\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.184709 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-httpd-config\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.198848 4711 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886c7d1f-5204-436e-a656-68b1ac98b586-combined-ca-bundle\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.213996 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcrjm\" (UniqueName: \"kubernetes.io/projected/886c7d1f-5204-436e-a656-68b1ac98b586-kube-api-access-jcrjm\") pod \"neutron-7b445d9db9-64xt2\" (UID: \"886c7d1f-5204-436e-a656-68b1ac98b586\") " pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.217276 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" podStartSLOduration=3.21724379 podStartE2EDuration="3.21724379s" podCreationTimestamp="2025-12-02 10:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:07.214388563 +0000 UTC m=+1176.923755020" watchObservedRunningTime="2025-12-02 10:33:07.21724379 +0000 UTC m=+1176.926610237" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.280601 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:07 crc kubenswrapper[4711]: I1202 10:33:07.292125 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c7694fb9b-29wkn" podStartSLOduration=3.292109365 podStartE2EDuration="3.292109365s" podCreationTimestamp="2025-12-02 10:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:07.262071499 +0000 UTC m=+1176.971437936" watchObservedRunningTime="2025-12-02 10:33:07.292109365 +0000 UTC m=+1177.001475812" Dec 02 10:33:09 crc kubenswrapper[4711]: I1202 10:33:09.159308 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:33:09 crc kubenswrapper[4711]: I1202 10:33:09.159548 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:33:09 crc kubenswrapper[4711]: I1202 10:33:09.756117 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:33:09 crc kubenswrapper[4711]: I1202 10:33:09.756431 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:33:09 crc kubenswrapper[4711]: I1202 10:33:09.767581 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 10:33:09 crc kubenswrapper[4711]: I1202 10:33:09.780564 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 10:33:09 crc kubenswrapper[4711]: I1202 10:33:09.856613 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:33:09 crc kubenswrapper[4711]: I1202 10:33:09.856667 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 
10:33:10 crc kubenswrapper[4711]: I1202 10:33:10.052557 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 10:33:10 crc kubenswrapper[4711]: I1202 10:33:10.184347 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:33:10 crc kubenswrapper[4711]: I1202 10:33:10.299987 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 10:33:10 crc kubenswrapper[4711]: I1202 10:33:10.555755 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:33:14 crc kubenswrapper[4711]: I1202 10:33:14.529184 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:14 crc kubenswrapper[4711]: I1202 10:33:14.600341 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ndg5k"] Dec 02 10:33:14 crc kubenswrapper[4711]: I1202 10:33:14.602253 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" podUID="c26200d5-5908-40af-89de-c219091721b5" containerName="dnsmasq-dns" containerID="cri-o://ceea0da7b494d6c75f0d5db008d711e7e45f9477875d3c9ddfab2c116ab22750" gracePeriod=10 Dec 02 10:33:14 crc kubenswrapper[4711]: I1202 10:33:14.869913 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:33:14 crc kubenswrapper[4711]: I1202 10:33:14.895338 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:33:14 crc kubenswrapper[4711]: I1202 10:33:14.899585 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ss597" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.014924 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/708582b5-ed1b-43e9-959a-482979700291-kube-api-access-z4t96\") pod \"708582b5-ed1b-43e9-959a-482979700291\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.014990 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-config-data\") pod \"426ff483-f882-4d91-b5da-bab147d2886d\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015011 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb9lf\" (UniqueName: \"kubernetes.io/projected/426ff483-f882-4d91-b5da-bab147d2886d-kube-api-access-jb9lf\") pod \"426ff483-f882-4d91-b5da-bab147d2886d\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015066 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-db-sync-config-data\") pod \"708582b5-ed1b-43e9-959a-482979700291\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015102 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-combined-ca-bundle\") pod \"426ff483-f882-4d91-b5da-bab147d2886d\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015194 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-scripts\") pod \"426ff483-f882-4d91-b5da-bab147d2886d\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015232 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-scripts\") pod \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015257 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426ff483-f882-4d91-b5da-bab147d2886d-logs\") pod \"426ff483-f882-4d91-b5da-bab147d2886d\" (UID: \"426ff483-f882-4d91-b5da-bab147d2886d\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015281 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-combined-ca-bundle\") pod \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015296 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-config-data\") pod \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015332 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-fernet-keys\") pod \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015361 4711 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxd67\" (UniqueName: \"kubernetes.io/projected/5266d7e0-bd1b-4266-b8eb-af6080873ad5-kube-api-access-bxd67\") pod \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015397 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-combined-ca-bundle\") pod \"708582b5-ed1b-43e9-959a-482979700291\" (UID: \"708582b5-ed1b-43e9-959a-482979700291\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015440 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-credential-keys\") pod \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\" (UID: \"5266d7e0-bd1b-4266-b8eb-af6080873ad5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.015778 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/426ff483-f882-4d91-b5da-bab147d2886d-logs" (OuterVolumeSpecName: "logs") pod "426ff483-f882-4d91-b5da-bab147d2886d" (UID: "426ff483-f882-4d91-b5da-bab147d2886d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.022905 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5266d7e0-bd1b-4266-b8eb-af6080873ad5" (UID: "5266d7e0-bd1b-4266-b8eb-af6080873ad5"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.023007 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-scripts" (OuterVolumeSpecName: "scripts") pod "426ff483-f882-4d91-b5da-bab147d2886d" (UID: "426ff483-f882-4d91-b5da-bab147d2886d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.024192 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426ff483-f882-4d91-b5da-bab147d2886d-kube-api-access-jb9lf" (OuterVolumeSpecName: "kube-api-access-jb9lf") pod "426ff483-f882-4d91-b5da-bab147d2886d" (UID: "426ff483-f882-4d91-b5da-bab147d2886d"). InnerVolumeSpecName "kube-api-access-jb9lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.024265 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708582b5-ed1b-43e9-959a-482979700291-kube-api-access-z4t96" (OuterVolumeSpecName: "kube-api-access-z4t96") pod "708582b5-ed1b-43e9-959a-482979700291" (UID: "708582b5-ed1b-43e9-959a-482979700291"). InnerVolumeSpecName "kube-api-access-z4t96". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.025249 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "708582b5-ed1b-43e9-959a-482979700291" (UID: "708582b5-ed1b-43e9-959a-482979700291"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.030475 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5266d7e0-bd1b-4266-b8eb-af6080873ad5" (UID: "5266d7e0-bd1b-4266-b8eb-af6080873ad5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.031324 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-scripts" (OuterVolumeSpecName: "scripts") pod "5266d7e0-bd1b-4266-b8eb-af6080873ad5" (UID: "5266d7e0-bd1b-4266-b8eb-af6080873ad5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.031470 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5266d7e0-bd1b-4266-b8eb-af6080873ad5-kube-api-access-bxd67" (OuterVolumeSpecName: "kube-api-access-bxd67") pod "5266d7e0-bd1b-4266-b8eb-af6080873ad5" (UID: "5266d7e0-bd1b-4266-b8eb-af6080873ad5"). InnerVolumeSpecName "kube-api-access-bxd67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.047072 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-config-data" (OuterVolumeSpecName: "config-data") pod "5266d7e0-bd1b-4266-b8eb-af6080873ad5" (UID: "5266d7e0-bd1b-4266-b8eb-af6080873ad5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.052339 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5266d7e0-bd1b-4266-b8eb-af6080873ad5" (UID: "5266d7e0-bd1b-4266-b8eb-af6080873ad5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.055550 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "708582b5-ed1b-43e9-959a-482979700291" (UID: "708582b5-ed1b-43e9-959a-482979700291"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.057044 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "426ff483-f882-4d91-b5da-bab147d2886d" (UID: "426ff483-f882-4d91-b5da-bab147d2886d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.061012 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-config-data" (OuterVolumeSpecName: "config-data") pod "426ff483-f882-4d91-b5da-bab147d2886d" (UID: "426ff483-f882-4d91-b5da-bab147d2886d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117522 4711 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117559 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/708582b5-ed1b-43e9-959a-482979700291-kube-api-access-z4t96\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117571 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117581 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb9lf\" (UniqueName: \"kubernetes.io/projected/426ff483-f882-4d91-b5da-bab147d2886d-kube-api-access-jb9lf\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117591 4711 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117601 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117609 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426ff483-f882-4d91-b5da-bab147d2886d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 
10:33:15.117618 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117626 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426ff483-f882-4d91-b5da-bab147d2886d-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117633 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117641 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117648 4711 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5266d7e0-bd1b-4266-b8eb-af6080873ad5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117656 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxd67\" (UniqueName: \"kubernetes.io/projected/5266d7e0-bd1b-4266-b8eb-af6080873ad5-kube-api-access-bxd67\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.117665 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708582b5-ed1b-43e9-959a-482979700291-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.241296 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ss597" 
event={"ID":"426ff483-f882-4d91-b5da-bab147d2886d","Type":"ContainerDied","Data":"cf7d26984a1f0fe9b4c9d4a717aad644245adf3c113e8bee60aced83280d867f"} Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.241337 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf7d26984a1f0fe9b4c9d4a717aad644245adf3c113e8bee60aced83280d867f" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.241341 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ss597" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.244885 4711 generic.go:334] "Generic (PLEG): container finished" podID="c26200d5-5908-40af-89de-c219091721b5" containerID="ceea0da7b494d6c75f0d5db008d711e7e45f9477875d3c9ddfab2c116ab22750" exitCode=0 Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.244997 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" event={"ID":"c26200d5-5908-40af-89de-c219091721b5","Type":"ContainerDied","Data":"ceea0da7b494d6c75f0d5db008d711e7e45f9477875d3c9ddfab2c116ab22750"} Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.247764 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pk6jj" event={"ID":"5266d7e0-bd1b-4266-b8eb-af6080873ad5","Type":"ContainerDied","Data":"4d8350de2130d5e51354bead2142f707d90e291628fa2f795af85c56b3845286"} Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.248010 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d8350de2130d5e51354bead2142f707d90e291628fa2f795af85c56b3845286" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.247843 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pk6jj" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.249586 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xfj2j" event={"ID":"708582b5-ed1b-43e9-959a-482979700291","Type":"ContainerDied","Data":"fb400266108c018ccbee54424abae20f425777c13b86a4befc42a1d31c4c06fd"} Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.249620 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb400266108c018ccbee54424abae20f425777c13b86a4befc42a1d31c4c06fd" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.249656 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xfj2j" Dec 02 10:33:15 crc kubenswrapper[4711]: W1202 10:33:15.270603 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod886c7d1f_5204_436e_a656_68b1ac98b586.slice/crio-d56be21d24476f24d02040c1e6be01542d889e84b00f7923d3e93e5e7b144c22 WatchSource:0}: Error finding container d56be21d24476f24d02040c1e6be01542d889e84b00f7923d3e93e5e7b144c22: Status 404 returned error can't find the container with id d56be21d24476f24d02040c1e6be01542d889e84b00f7923d3e93e5e7b144c22 Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.281808 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b445d9db9-64xt2"] Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.632884 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.731160 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-swift-storage-0\") pod \"c26200d5-5908-40af-89de-c219091721b5\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.731365 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-sb\") pod \"c26200d5-5908-40af-89de-c219091721b5\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.731517 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-nb\") pod \"c26200d5-5908-40af-89de-c219091721b5\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.731593 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-config\") pod \"c26200d5-5908-40af-89de-c219091721b5\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.731685 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-svc\") pod \"c26200d5-5908-40af-89de-c219091721b5\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.731794 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tkqp\" 
(UniqueName: \"kubernetes.io/projected/c26200d5-5908-40af-89de-c219091721b5-kube-api-access-7tkqp\") pod \"c26200d5-5908-40af-89de-c219091721b5\" (UID: \"c26200d5-5908-40af-89de-c219091721b5\") " Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.736229 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26200d5-5908-40af-89de-c219091721b5-kube-api-access-7tkqp" (OuterVolumeSpecName: "kube-api-access-7tkqp") pod "c26200d5-5908-40af-89de-c219091721b5" (UID: "c26200d5-5908-40af-89de-c219091721b5"). InnerVolumeSpecName "kube-api-access-7tkqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.776707 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c26200d5-5908-40af-89de-c219091721b5" (UID: "c26200d5-5908-40af-89de-c219091721b5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.780925 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-config" (OuterVolumeSpecName: "config") pod "c26200d5-5908-40af-89de-c219091721b5" (UID: "c26200d5-5908-40af-89de-c219091721b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.786709 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c26200d5-5908-40af-89de-c219091721b5" (UID: "c26200d5-5908-40af-89de-c219091721b5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.787243 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c26200d5-5908-40af-89de-c219091721b5" (UID: "c26200d5-5908-40af-89de-c219091721b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.806496 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c26200d5-5908-40af-89de-c219091721b5" (UID: "c26200d5-5908-40af-89de-c219091721b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.833979 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.834019 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.834031 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.834042 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-config\") on node \"crc\" DevicePath 
\"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.834056 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26200d5-5908-40af-89de-c219091721b5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.834069 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tkqp\" (UniqueName: \"kubernetes.io/projected/c26200d5-5908-40af-89de-c219091721b5-kube-api-access-7tkqp\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.987933 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6986b467dd-l4plx"] Dec 02 10:33:15 crc kubenswrapper[4711]: E1202 10:33:15.988404 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426ff483-f882-4d91-b5da-bab147d2886d" containerName="placement-db-sync" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988431 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="426ff483-f882-4d91-b5da-bab147d2886d" containerName="placement-db-sync" Dec 02 10:33:15 crc kubenswrapper[4711]: E1202 10:33:15.988451 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708582b5-ed1b-43e9-959a-482979700291" containerName="barbican-db-sync" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988461 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="708582b5-ed1b-43e9-959a-482979700291" containerName="barbican-db-sync" Dec 02 10:33:15 crc kubenswrapper[4711]: E1202 10:33:15.988491 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5266d7e0-bd1b-4266-b8eb-af6080873ad5" containerName="keystone-bootstrap" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988503 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5266d7e0-bd1b-4266-b8eb-af6080873ad5" containerName="keystone-bootstrap" Dec 02 10:33:15 crc kubenswrapper[4711]: E1202 10:33:15.988521 4711 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c26200d5-5908-40af-89de-c219091721b5" containerName="dnsmasq-dns" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988530 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26200d5-5908-40af-89de-c219091721b5" containerName="dnsmasq-dns" Dec 02 10:33:15 crc kubenswrapper[4711]: E1202 10:33:15.988565 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26200d5-5908-40af-89de-c219091721b5" containerName="init" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988574 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26200d5-5908-40af-89de-c219091721b5" containerName="init" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988794 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26200d5-5908-40af-89de-c219091721b5" containerName="dnsmasq-dns" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988823 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="426ff483-f882-4d91-b5da-bab147d2886d" containerName="placement-db-sync" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988849 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="708582b5-ed1b-43e9-959a-482979700291" containerName="barbican-db-sync" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.988873 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="5266d7e0-bd1b-4266-b8eb-af6080873ad5" containerName="keystone-bootstrap" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.989628 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6986b467dd-l4plx" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.993108 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.996661 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.996671 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6k6c6" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.996715 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.996992 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6986b467dd-l4plx"] Dec 02 10:33:15 crc kubenswrapper[4711]: I1202 10:33:15.998317 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.010324 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.100927 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6545f6547b-92nrg"] Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.102779 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.106774 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.107295 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sdcct"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.107667 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.108364 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.113845 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.121016 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6545f6547b-92nrg"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.141900 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-fernet-keys\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.141986 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-public-tls-certs\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.142036 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-config-data\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.142071 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wczhp\" (UniqueName: \"kubernetes.io/projected/805add98-0168-44c8-a35c-dfdd1709a8ae-kube-api-access-wczhp\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.142091 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-scripts\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.142165 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-internal-tls-certs\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.142181 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-combined-ca-bundle\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.142203 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-credential-keys\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.197830 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d4b47d497-gjzqt"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.199339 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.202550 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.202736 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dp74j"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.202833 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.232093 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d4b47d497-gjzqt"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.238437 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.239841 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.243922 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245605 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-combined-ca-bundle\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245694 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-config-data\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245756 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34890e20-861e-4023-8029-aff08285be51-logs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245812 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-internal-tls-certs\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245831 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-combined-ca-bundle\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245852 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-credential-keys\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245889 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-public-tls-certs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245917 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-fernet-keys\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.245985 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-public-tls-certs\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.246010 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-scripts\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.246067 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-internal-tls-certs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.246086 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-config-data\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.246124 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgmbw\" (UniqueName: \"kubernetes.io/projected/34890e20-861e-4023-8029-aff08285be51-kube-api-access-lgmbw\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.246161 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wczhp\" (UniqueName: \"kubernetes.io/projected/805add98-0168-44c8-a35c-dfdd1709a8ae-kube-api-access-wczhp\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.246204 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-scripts\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.251799 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-combined-ca-bundle\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.251890 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.263849 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-scripts\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.265209 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-internal-tls-certs\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.268111 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-public-tls-certs\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.271632 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-credential-keys\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.271743 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-config-data\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.272721 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/805add98-0168-44c8-a35c-dfdd1709a8ae-fernet-keys\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.288850 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wczhp\" (UniqueName: \"kubernetes.io/projected/805add98-0168-44c8-a35c-dfdd1709a8ae-kube-api-access-wczhp\") pod \"keystone-6986b467dd-l4plx\" (UID: \"805add98-0168-44c8-a35c-dfdd1709a8ae\") " pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.290718 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b445d9db9-64xt2" event={"ID":"886c7d1f-5204-436e-a656-68b1ac98b586","Type":"ContainerStarted","Data":"d0ddf002f4b867c93764b85c3804e13e8770e7c26bd2b6b58261f53854de3016"}
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.290805 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b445d9db9-64xt2" event={"ID":"886c7d1f-5204-436e-a656-68b1ac98b586","Type":"ContainerStarted","Data":"d56be21d24476f24d02040c1e6be01542d889e84b00f7923d3e93e5e7b144c22"}
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.310047 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.326998 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k" event={"ID":"c26200d5-5908-40af-89de-c219091721b5","Type":"ContainerDied","Data":"6ee340f7da0f9c6721092fd84e7571332e0cf882c799aa137f6b70293b40d180"}
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.327053 4711 scope.go:117] "RemoveContainer" containerID="ceea0da7b494d6c75f0d5db008d711e7e45f9477875d3c9ddfab2c116ab22750"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.327232 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-ndg5k"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.341894 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pmlwz"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.344425 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347134 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-logs\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347180 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-scripts\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347204 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9pn\" (UniqueName: \"kubernetes.io/projected/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-kube-api-access-mz9pn\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347229 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-internal-tls-certs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347245 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-combined-ca-bundle\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347266 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgmbw\" (UniqueName: \"kubernetes.io/projected/34890e20-861e-4023-8029-aff08285be51-kube-api-access-lgmbw\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347300 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-config-data\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347317 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cacc030-0a08-4dab-96e4-a024aa16faa6-logs\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347342 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-config-data\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347358 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-combined-ca-bundle\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347391 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kq5v\" (UniqueName: \"kubernetes.io/projected/2cacc030-0a08-4dab-96e4-a024aa16faa6-kube-api-access-7kq5v\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347414 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-config-data\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347449 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-config-data-custom\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347470 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34890e20-861e-4023-8029-aff08285be51-logs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347490 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-public-tls-certs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347514 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-config-data-custom\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.347540 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-combined-ca-bundle\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.352453 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pmlwz"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.352853 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34890e20-861e-4023-8029-aff08285be51-logs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.359159 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-public-tls-certs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.367693 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-scripts\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.368391 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-internal-tls-certs\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.368675 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-combined-ca-bundle\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.373762 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34890e20-861e-4023-8029-aff08285be51-config-data\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.391649 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgmbw\" (UniqueName: \"kubernetes.io/projected/34890e20-861e-4023-8029-aff08285be51-kube-api-access-lgmbw\") pod \"placement-6545f6547b-92nrg\" (UID: \"34890e20-861e-4023-8029-aff08285be51\") " pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.412064 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ndg5k"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.426714 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7759cb7c46-754xb"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.428690 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7759cb7c46-754xb"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.430702 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.435272 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.444348 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-ndg5k"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.449918 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz9pn\" (UniqueName: \"kubernetes.io/projected/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-kube-api-access-mz9pn\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451127 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-config\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451201 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-combined-ca-bundle\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451325 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-config-data\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451355 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cacc030-0a08-4dab-96e4-a024aa16faa6-logs\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451381 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-config-data\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451455 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451497 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451545 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kq5v\" (UniqueName: \"kubernetes.io/projected/2cacc030-0a08-4dab-96e4-a024aa16faa6-kube-api-access-7kq5v\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451637 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451659 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-config-data-custom\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451717 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7wdq\" (UniqueName: \"kubernetes.io/projected/8008b774-a8f4-426e-b0f8-7073e53c9fae-kube-api-access-w7wdq\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451753 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-config-data-custom\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451826 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-combined-ca-bundle\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451857 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.451890 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-logs\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.452320 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-logs\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.453217 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7759cb7c46-754xb"]
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.453600 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cacc030-0a08-4dab-96e4-a024aa16faa6-logs\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.456738 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-config-data\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.458247 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-combined-ca-bundle\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.458673 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-combined-ca-bundle\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.460177 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-config-data\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.460832 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-config-data-custom\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.462636 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cacc030-0a08-4dab-96e4-a024aa16faa6-config-data-custom\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.469732 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz9pn\" (UniqueName: \"kubernetes.io/projected/c55fa7c4-9945-4651-bf4b-9ad1b94e6047-kube-api-access-mz9pn\") pod \"barbican-worker-7d4b47d497-gjzqt\" (UID: \"c55fa7c4-9945-4651-bf4b-9ad1b94e6047\") " pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.472849 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kq5v\" (UniqueName: \"kubernetes.io/projected/2cacc030-0a08-4dab-96e4-a024aa16faa6-kube-api-access-7kq5v\") pod \"barbican-keystone-listener-7c5c78fc8b-7bz2t\" (UID: \"2cacc030-0a08-4dab-96e4-a024aa16faa6\") " pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.539160 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d4b47d497-gjzqt"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.559860 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7wdq\" (UniqueName: \"kubernetes.io/projected/8008b774-a8f4-426e-b0f8-7073e53c9fae-kube-api-access-w7wdq\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.559910 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdggq\" (UniqueName: \"kubernetes.io/projected/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-kube-api-access-jdggq\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.559960 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-logs\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.559984 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz"
Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.560026 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-config\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID:
\"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.560070 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-combined-ca-bundle\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.560105 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.560151 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.560175 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.560194 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data-custom\") pod \"barbican-api-7759cb7c46-754xb\" (UID: 
\"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.560244 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.560711 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-config\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.564317 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.564533 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.565113 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.565266 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.586892 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7wdq\" (UniqueName: \"kubernetes.io/projected/8008b774-a8f4-426e-b0f8-7073e53c9fae-kube-api-access-w7wdq\") pod \"dnsmasq-dns-848cf88cfc-pmlwz\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.642406 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.662177 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-combined-ca-bundle\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.662212 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.662266 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data-custom\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.662334 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdggq\" (UniqueName: \"kubernetes.io/projected/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-kube-api-access-jdggq\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.662363 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-logs\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.662777 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-logs\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.668100 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.670486 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data-custom\") pod 
\"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.670958 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-combined-ca-bundle\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.674383 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.677854 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdggq\" (UniqueName: \"kubernetes.io/projected/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-kube-api-access-jdggq\") pod \"barbican-api-7759cb7c46-754xb\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:16 crc kubenswrapper[4711]: I1202 10:33:16.748532 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:17 crc kubenswrapper[4711]: I1202 10:33:17.070979 4711 scope.go:117] "RemoveContainer" containerID="bb3bf0f965ee0585c48653379d175351f07404f88bf00028c808bc47fe0ac08f" Dec 02 10:33:17 crc kubenswrapper[4711]: I1202 10:33:17.088854 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26200d5-5908-40af-89de-c219091721b5" path="/var/lib/kubelet/pods/c26200d5-5908-40af-89de-c219091721b5/volumes" Dec 02 10:33:17 crc kubenswrapper[4711]: I1202 10:33:17.890944 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6986b467dd-l4plx"] Dec 02 10:33:17 crc kubenswrapper[4711]: I1202 10:33:17.910908 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pmlwz"] Dec 02 10:33:17 crc kubenswrapper[4711]: I1202 10:33:17.937414 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7759cb7c46-754xb"] Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.103192 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t"] Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.167461 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6545f6547b-92nrg"] Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.238001 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d4b47d497-gjzqt"] Dec 02 10:33:18 crc kubenswrapper[4711]: W1202 10:33:18.298102 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc55fa7c4_9945_4651_bf4b_9ad1b94e6047.slice/crio-88963c6238be58aeae04cd826db98bbfe06dbfdf9b68746d7f6fb83c37235a7f WatchSource:0}: Error finding container 88963c6238be58aeae04cd826db98bbfe06dbfdf9b68746d7f6fb83c37235a7f: Status 404 returned error can't find the container with id 
88963c6238be58aeae04cd826db98bbfe06dbfdf9b68746d7f6fb83c37235a7f Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.371395 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t" event={"ID":"2cacc030-0a08-4dab-96e4-a024aa16faa6","Type":"ContainerStarted","Data":"a77ba0fc55e6970868fe9e61d82c6515e22c616fe2a70e2da8b7d4007c6c151d"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.386556 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" event={"ID":"8008b774-a8f4-426e-b0f8-7073e53c9fae","Type":"ContainerStarted","Data":"d2496a79a6f2069b47d6267ee3428cda5565ecc545beb746d4bf0d8893e4557d"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.386641 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" event={"ID":"8008b774-a8f4-426e-b0f8-7073e53c9fae","Type":"ContainerStarted","Data":"b7433472abeb4218269b4f87a3a84f7bd574a49891badfba7ce34f5e33660b3f"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.400291 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98dbf68a-a027-4b09-a124-5438406d4b4f","Type":"ContainerStarted","Data":"ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.402300 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b445d9db9-64xt2" event={"ID":"886c7d1f-5204-436e-a656-68b1ac98b586","Type":"ContainerStarted","Data":"c26773c418b5df5e50faf6eaae4c0678f1845d9fbc0e188b128d5e273f8e82ec"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.403385 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.406204 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d4b47d497-gjzqt" 
event={"ID":"c55fa7c4-9945-4651-bf4b-9ad1b94e6047","Type":"ContainerStarted","Data":"88963c6238be58aeae04cd826db98bbfe06dbfdf9b68746d7f6fb83c37235a7f"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.418578 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7759cb7c46-754xb" event={"ID":"db67a475-c964-4dd8-b9e0-2c5d198b7a3c","Type":"ContainerStarted","Data":"fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.418633 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7759cb7c46-754xb" event={"ID":"db67a475-c964-4dd8-b9e0-2c5d198b7a3c","Type":"ContainerStarted","Data":"042e457ccc7d95b94f72de1bd0f318e7f54f8288a7ea6237aa3dcac440dfbf2b"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.443861 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b445d9db9-64xt2" podStartSLOduration=12.44384141 podStartE2EDuration="12.44384141s" podCreationTimestamp="2025-12-02 10:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:18.436164591 +0000 UTC m=+1188.145531038" watchObservedRunningTime="2025-12-02 10:33:18.44384141 +0000 UTC m=+1188.153207857" Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.463848 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6986b467dd-l4plx" event={"ID":"805add98-0168-44c8-a35c-dfdd1709a8ae","Type":"ContainerStarted","Data":"d9e418fcc897ef8158180f87cae20ed16043924d5d3b8b64d97ab3c12bc4e727"} Dec 02 10:33:18 crc kubenswrapper[4711]: I1202 10:33:18.472056 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6545f6547b-92nrg" event={"ID":"34890e20-861e-4023-8029-aff08285be51","Type":"ContainerStarted","Data":"66af427b237970e19135a36dea0147973bced0bfa4fa03a3e2fb7cd2c1b49e8b"} Dec 02 10:33:19 crc 
kubenswrapper[4711]: I1202 10:33:19.050676 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-fddf747c8-8wktl"] Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.052854 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.055978 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.056072 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.114541 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fddf747c8-8wktl"] Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.226490 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/429cc017-c93c-4d8a-b5eb-819eb6fde287-logs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.226542 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-public-tls-certs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.226569 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-internal-tls-certs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " 
pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.226613 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-combined-ca-bundle\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.226647 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfw9z\" (UniqueName: \"kubernetes.io/projected/429cc017-c93c-4d8a-b5eb-819eb6fde287-kube-api-access-xfw9z\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.226663 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-config-data-custom\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.226688 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-config-data\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.328634 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/429cc017-c93c-4d8a-b5eb-819eb6fde287-logs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: 
\"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.328713 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-public-tls-certs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.328744 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-internal-tls-certs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.328781 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-combined-ca-bundle\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.328806 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfw9z\" (UniqueName: \"kubernetes.io/projected/429cc017-c93c-4d8a-b5eb-819eb6fde287-kube-api-access-xfw9z\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.328823 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-config-data-custom\") pod \"barbican-api-fddf747c8-8wktl\" (UID: 
\"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.328845 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-config-data\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.329992 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/429cc017-c93c-4d8a-b5eb-819eb6fde287-logs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.333681 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-config-data-custom\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.333768 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-combined-ca-bundle\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.334234 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-internal-tls-certs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 
10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.345296 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-public-tls-certs\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.352200 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfw9z\" (UniqueName: \"kubernetes.io/projected/429cc017-c93c-4d8a-b5eb-819eb6fde287-kube-api-access-xfw9z\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.352428 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429cc017-c93c-4d8a-b5eb-819eb6fde287-config-data\") pod \"barbican-api-fddf747c8-8wktl\" (UID: \"429cc017-c93c-4d8a-b5eb-819eb6fde287\") " pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.393765 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.485368 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6986b467dd-l4plx" event={"ID":"805add98-0168-44c8-a35c-dfdd1709a8ae","Type":"ContainerStarted","Data":"1ff04b6b5467ffceaff5122a2383508908833aae0370e9f19746b658f593deb8"} Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.486575 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6986b467dd-l4plx" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.502812 4711 generic.go:334] "Generic (PLEG): container finished" podID="8008b774-a8f4-426e-b0f8-7073e53c9fae" containerID="d2496a79a6f2069b47d6267ee3428cda5565ecc545beb746d4bf0d8893e4557d" exitCode=0 Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.504094 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" event={"ID":"8008b774-a8f4-426e-b0f8-7073e53c9fae","Type":"ContainerDied","Data":"d2496a79a6f2069b47d6267ee3428cda5565ecc545beb746d4bf0d8893e4557d"} Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.519035 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6986b467dd-l4plx" podStartSLOduration=4.518934133 podStartE2EDuration="4.518934133s" podCreationTimestamp="2025-12-02 10:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:19.510444512 +0000 UTC m=+1189.219810959" watchObservedRunningTime="2025-12-02 10:33:19.518934133 +0000 UTC m=+1189.228300580" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.757821 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76998c6f5b-xhr78" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 02 10:33:19 crc kubenswrapper[4711]: I1202 10:33:19.864764 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b4d9565bd-5nwjn" podUID="a5e4731d-0cea-4530-aba2-86777a8db6cb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.133145 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fddf747c8-8wktl"] Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.540490 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" event={"ID":"8008b774-a8f4-426e-b0f8-7073e53c9fae","Type":"ContainerStarted","Data":"944f90ec01056ec389a89e9f186c94b0482a5f9fb83134e0a0b872e7a4f9f32a"} Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.540796 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.550735 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fddf747c8-8wktl" event={"ID":"429cc017-c93c-4d8a-b5eb-819eb6fde287","Type":"ContainerStarted","Data":"6ed963213cb9ea6eb8cdedf965fd87e2f239f6720b6f90f844e6e2521692b8ad"} Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.550778 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fddf747c8-8wktl" event={"ID":"429cc017-c93c-4d8a-b5eb-819eb6fde287","Type":"ContainerStarted","Data":"f6d9501694d20bb26fb6ffec2dfd86b8fd131489a80965a7c5b48a6da13fa00a"} Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.573019 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hhnhk" 
event={"ID":"c36d7741-4744-4076-ad79-2cd1aca48cec","Type":"ContainerStarted","Data":"d8df2dc80ebac905a5058820497382d476f98341e2433c0227b7a47c6d1c8abc"} Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.581138 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7759cb7c46-754xb" event={"ID":"db67a475-c964-4dd8-b9e0-2c5d198b7a3c","Type":"ContainerStarted","Data":"0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7"} Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.582588 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.582702 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.584363 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" podStartSLOduration=4.584338551 podStartE2EDuration="4.584338551s" podCreationTimestamp="2025-12-02 10:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:20.574314119 +0000 UTC m=+1190.283680556" watchObservedRunningTime="2025-12-02 10:33:20.584338551 +0000 UTC m=+1190.293704998" Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.597220 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6545f6547b-92nrg" event={"ID":"34890e20-861e-4023-8029-aff08285be51","Type":"ContainerStarted","Data":"c3043be0f0d7f2e0ec91f51d0afe4c4f16d3881fb4e4da35a483dfc942adabbb"} Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.600790 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-hhnhk" podStartSLOduration=5.99261319 podStartE2EDuration="51.600766597s" podCreationTimestamp="2025-12-02 10:32:29 +0000 UTC" 
firstStartedPulling="2025-12-02 10:32:31.606056851 +0000 UTC m=+1141.315423298" lastFinishedPulling="2025-12-02 10:33:17.214210258 +0000 UTC m=+1186.923576705" observedRunningTime="2025-12-02 10:33:20.595771462 +0000 UTC m=+1190.305137919" watchObservedRunningTime="2025-12-02 10:33:20.600766597 +0000 UTC m=+1190.310133044" Dec 02 10:33:20 crc kubenswrapper[4711]: I1202 10:33:20.618827 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7759cb7c46-754xb" podStartSLOduration=4.618802348 podStartE2EDuration="4.618802348s" podCreationTimestamp="2025-12-02 10:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:20.613654888 +0000 UTC m=+1190.323021335" watchObservedRunningTime="2025-12-02 10:33:20.618802348 +0000 UTC m=+1190.328168795" Dec 02 10:33:21 crc kubenswrapper[4711]: I1202 10:33:21.628944 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fddf747c8-8wktl" event={"ID":"429cc017-c93c-4d8a-b5eb-819eb6fde287","Type":"ContainerStarted","Data":"db2db03d68830a2257536f44ff954e73eba713cd99fd8b744636b031760e8bee"} Dec 02 10:33:21 crc kubenswrapper[4711]: I1202 10:33:21.629668 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:21 crc kubenswrapper[4711]: I1202 10:33:21.629693 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:21 crc kubenswrapper[4711]: I1202 10:33:21.637394 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6545f6547b-92nrg" event={"ID":"34890e20-861e-4023-8029-aff08285be51","Type":"ContainerStarted","Data":"719ae26bd606bc9743de774f76b5eb7ced75d1fce83b8ffa0730f11509047ed3"} Dec 02 10:33:21 crc kubenswrapper[4711]: I1202 10:33:21.637435 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/placement-6545f6547b-92nrg" Dec 02 10:33:21 crc kubenswrapper[4711]: I1202 10:33:21.638018 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6545f6547b-92nrg" Dec 02 10:33:21 crc kubenswrapper[4711]: I1202 10:33:21.656843 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-fddf747c8-8wktl" podStartSLOduration=2.656824273 podStartE2EDuration="2.656824273s" podCreationTimestamp="2025-12-02 10:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:21.655718242 +0000 UTC m=+1191.365084709" watchObservedRunningTime="2025-12-02 10:33:21.656824273 +0000 UTC m=+1191.366190730" Dec 02 10:33:21 crc kubenswrapper[4711]: I1202 10:33:21.697971 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6545f6547b-92nrg" podStartSLOduration=5.69793468 podStartE2EDuration="5.69793468s" podCreationTimestamp="2025-12-02 10:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:21.687745893 +0000 UTC m=+1191.397112350" watchObservedRunningTime="2025-12-02 10:33:21.69793468 +0000 UTC m=+1191.407301147" Dec 02 10:33:22 crc kubenswrapper[4711]: I1202 10:33:22.585878 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:33:22 crc kubenswrapper[4711]: I1202 10:33:22.586451 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:33:22 crc kubenswrapper[4711]: I1202 10:33:22.586523 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:33:22 crc kubenswrapper[4711]: I1202 10:33:22.587471 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b9ab21e8bb7413840e645c998ba8a37411c45606ceeecfb5d6d1574a7966068"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:33:22 crc kubenswrapper[4711]: I1202 10:33:22.587569 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://7b9ab21e8bb7413840e645c998ba8a37411c45606ceeecfb5d6d1574a7966068" gracePeriod=600 Dec 02 10:33:22 crc kubenswrapper[4711]: I1202 10:33:22.648995 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t" event={"ID":"2cacc030-0a08-4dab-96e4-a024aa16faa6","Type":"ContainerStarted","Data":"91ab1264f2e515b430f85d4d19be7d47cc7ffb23918eef5cd9e40c0e0ab2b40b"} Dec 02 10:33:22 crc kubenswrapper[4711]: I1202 10:33:22.651900 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d4b47d497-gjzqt" event={"ID":"c55fa7c4-9945-4651-bf4b-9ad1b94e6047","Type":"ContainerStarted","Data":"74664800cfe8376b633a8a6499396f913e60e121b5b72fe7a47093f14caddb7e"} Dec 02 10:33:23 crc kubenswrapper[4711]: I1202 10:33:23.672171 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d4b47d497-gjzqt" 
event={"ID":"c55fa7c4-9945-4651-bf4b-9ad1b94e6047","Type":"ContainerStarted","Data":"9cc14f45b1f8c3710f21ea9f8a243de1aa2f5ee15e82ab350589a47c43f42487"} Dec 02 10:33:23 crc kubenswrapper[4711]: I1202 10:33:23.678576 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="7b9ab21e8bb7413840e645c998ba8a37411c45606ceeecfb5d6d1574a7966068" exitCode=0 Dec 02 10:33:23 crc kubenswrapper[4711]: I1202 10:33:23.678643 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"7b9ab21e8bb7413840e645c998ba8a37411c45606ceeecfb5d6d1574a7966068"} Dec 02 10:33:23 crc kubenswrapper[4711]: I1202 10:33:23.678713 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"aa264d5b0f373424df2b67d6e79de2f6c80da037caa0a7a377debbcb2ad5e375"} Dec 02 10:33:23 crc kubenswrapper[4711]: I1202 10:33:23.678766 4711 scope.go:117] "RemoveContainer" containerID="8c4568791abe9bd7256ecd483bef73160af4505d06199fa89bd749115edf5f3a" Dec 02 10:33:23 crc kubenswrapper[4711]: I1202 10:33:23.681281 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t" event={"ID":"2cacc030-0a08-4dab-96e4-a024aa16faa6","Type":"ContainerStarted","Data":"a59bcafce59fefba2cb73a043c7df3b044cc4c44e54410a960a93a0c8b64fcd9"} Dec 02 10:33:23 crc kubenswrapper[4711]: I1202 10:33:23.706401 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d4b47d497-gjzqt" podStartSLOduration=4.026287654 podStartE2EDuration="7.706365111s" podCreationTimestamp="2025-12-02 10:33:16 +0000 UTC" firstStartedPulling="2025-12-02 10:33:18.299857147 +0000 UTC m=+1188.009223594" 
lastFinishedPulling="2025-12-02 10:33:21.979934604 +0000 UTC m=+1191.689301051" observedRunningTime="2025-12-02 10:33:23.692175025 +0000 UTC m=+1193.401541482" watchObservedRunningTime="2025-12-02 10:33:23.706365111 +0000 UTC m=+1193.415731559" Dec 02 10:33:23 crc kubenswrapper[4711]: I1202 10:33:23.732710 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7c5c78fc8b-7bz2t" podStartSLOduration=3.921012232 podStartE2EDuration="7.732668356s" podCreationTimestamp="2025-12-02 10:33:16 +0000 UTC" firstStartedPulling="2025-12-02 10:33:18.16090095 +0000 UTC m=+1187.870267397" lastFinishedPulling="2025-12-02 10:33:21.972557074 +0000 UTC m=+1191.681923521" observedRunningTime="2025-12-02 10:33:23.727008652 +0000 UTC m=+1193.436375099" watchObservedRunningTime="2025-12-02 10:33:23.732668356 +0000 UTC m=+1193.442034803" Dec 02 10:33:26 crc kubenswrapper[4711]: I1202 10:33:26.676257 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:26 crc kubenswrapper[4711]: I1202 10:33:26.746491 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jvjm2"] Dec 02 10:33:26 crc kubenswrapper[4711]: I1202 10:33:26.746763 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" podUID="c2a80451-d670-436c-9da0-20e3aec8e2ad" containerName="dnsmasq-dns" containerID="cri-o://f48da6b8eb9365751b6196964e060694bcc396d60ce033a727cf4a1fcea6cf8d" gracePeriod=10 Dec 02 10:33:26 crc kubenswrapper[4711]: I1202 10:33:26.748253 4711 generic.go:334] "Generic (PLEG): container finished" podID="c36d7741-4744-4076-ad79-2cd1aca48cec" containerID="d8df2dc80ebac905a5058820497382d476f98341e2433c0227b7a47c6d1c8abc" exitCode=0 Dec 02 10:33:26 crc kubenswrapper[4711]: I1202 10:33:26.748301 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-hhnhk" event={"ID":"c36d7741-4744-4076-ad79-2cd1aca48cec","Type":"ContainerDied","Data":"d8df2dc80ebac905a5058820497382d476f98341e2433c0227b7a47c6d1c8abc"} Dec 02 10:33:27 crc kubenswrapper[4711]: I1202 10:33:27.770021 4711 generic.go:334] "Generic (PLEG): container finished" podID="c2a80451-d670-436c-9da0-20e3aec8e2ad" containerID="f48da6b8eb9365751b6196964e060694bcc396d60ce033a727cf4a1fcea6cf8d" exitCode=0 Dec 02 10:33:27 crc kubenswrapper[4711]: I1202 10:33:27.770151 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" event={"ID":"c2a80451-d670-436c-9da0-20e3aec8e2ad","Type":"ContainerDied","Data":"f48da6b8eb9365751b6196964e060694bcc396d60ce033a727cf4a1fcea6cf8d"} Dec 02 10:33:28 crc kubenswrapper[4711]: I1202 10:33:28.308375 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:28 crc kubenswrapper[4711]: I1202 10:33:28.354511 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.081987 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.249258 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c36d7741-4744-4076-ad79-2cd1aca48cec-etc-machine-id\") pod \"c36d7741-4744-4076-ad79-2cd1aca48cec\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.249374 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36d7741-4744-4076-ad79-2cd1aca48cec-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c36d7741-4744-4076-ad79-2cd1aca48cec" (UID: "c36d7741-4744-4076-ad79-2cd1aca48cec"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.249700 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-scripts\") pod \"c36d7741-4744-4076-ad79-2cd1aca48cec\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.249768 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-combined-ca-bundle\") pod \"c36d7741-4744-4076-ad79-2cd1aca48cec\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.249793 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf2tt\" (UniqueName: \"kubernetes.io/projected/c36d7741-4744-4076-ad79-2cd1aca48cec-kube-api-access-jf2tt\") pod \"c36d7741-4744-4076-ad79-2cd1aca48cec\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.249972 4711 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-db-sync-config-data\") pod \"c36d7741-4744-4076-ad79-2cd1aca48cec\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.250001 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-config-data\") pod \"c36d7741-4744-4076-ad79-2cd1aca48cec\" (UID: \"c36d7741-4744-4076-ad79-2cd1aca48cec\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.250512 4711 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c36d7741-4744-4076-ad79-2cd1aca48cec-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.258103 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c36d7741-4744-4076-ad79-2cd1aca48cec" (UID: "c36d7741-4744-4076-ad79-2cd1aca48cec"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.258116 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36d7741-4744-4076-ad79-2cd1aca48cec-kube-api-access-jf2tt" (OuterVolumeSpecName: "kube-api-access-jf2tt") pod "c36d7741-4744-4076-ad79-2cd1aca48cec" (UID: "c36d7741-4744-4076-ad79-2cd1aca48cec"). InnerVolumeSpecName "kube-api-access-jf2tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.258120 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-scripts" (OuterVolumeSpecName: "scripts") pod "c36d7741-4744-4076-ad79-2cd1aca48cec" (UID: "c36d7741-4744-4076-ad79-2cd1aca48cec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.284696 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c36d7741-4744-4076-ad79-2cd1aca48cec" (UID: "c36d7741-4744-4076-ad79-2cd1aca48cec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.300719 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.348573 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-config-data" (OuterVolumeSpecName: "config-data") pod "c36d7741-4744-4076-ad79-2cd1aca48cec" (UID: "c36d7741-4744-4076-ad79-2cd1aca48cec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.353205 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.353247 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.353264 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf2tt\" (UniqueName: \"kubernetes.io/projected/c36d7741-4744-4076-ad79-2cd1aca48cec-kube-api-access-jf2tt\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.353276 4711 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.353286 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d7741-4744-4076-ad79-2cd1aca48cec-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: E1202 10:33:29.354741 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.454146 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-nb\") pod \"c2a80451-d670-436c-9da0-20e3aec8e2ad\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.454255 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55kg9\" (UniqueName: \"kubernetes.io/projected/c2a80451-d670-436c-9da0-20e3aec8e2ad-kube-api-access-55kg9\") pod \"c2a80451-d670-436c-9da0-20e3aec8e2ad\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.454280 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-svc\") pod \"c2a80451-d670-436c-9da0-20e3aec8e2ad\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.454301 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-swift-storage-0\") pod \"c2a80451-d670-436c-9da0-20e3aec8e2ad\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.454350 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-config\") pod \"c2a80451-d670-436c-9da0-20e3aec8e2ad\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.454453 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-sb\") pod \"c2a80451-d670-436c-9da0-20e3aec8e2ad\" (UID: \"c2a80451-d670-436c-9da0-20e3aec8e2ad\") " Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 
10:33:29.475149 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a80451-d670-436c-9da0-20e3aec8e2ad-kube-api-access-55kg9" (OuterVolumeSpecName: "kube-api-access-55kg9") pod "c2a80451-d670-436c-9da0-20e3aec8e2ad" (UID: "c2a80451-d670-436c-9da0-20e3aec8e2ad"). InnerVolumeSpecName "kube-api-access-55kg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.510821 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2a80451-d670-436c-9da0-20e3aec8e2ad" (UID: "c2a80451-d670-436c-9da0-20e3aec8e2ad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.510848 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2a80451-d670-436c-9da0-20e3aec8e2ad" (UID: "c2a80451-d670-436c-9da0-20e3aec8e2ad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.514177 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-config" (OuterVolumeSpecName: "config") pod "c2a80451-d670-436c-9da0-20e3aec8e2ad" (UID: "c2a80451-d670-436c-9da0-20e3aec8e2ad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.517739 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c2a80451-d670-436c-9da0-20e3aec8e2ad" (UID: "c2a80451-d670-436c-9da0-20e3aec8e2ad"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.527175 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2a80451-d670-436c-9da0-20e3aec8e2ad" (UID: "c2a80451-d670-436c-9da0-20e3aec8e2ad"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.556495 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.556523 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.556532 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.556540 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55kg9\" (UniqueName: \"kubernetes.io/projected/c2a80451-d670-436c-9da0-20e3aec8e2ad-kube-api-access-55kg9\") on node 
\"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.556550 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.556559 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2a80451-d670-436c-9da0-20e3aec8e2ad-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.805070 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98dbf68a-a027-4b09-a124-5438406d4b4f","Type":"ContainerStarted","Data":"0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45"} Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.805747 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.805733 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="ceilometer-notification-agent" containerID="cri-o://4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff" gracePeriod=30 Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.806171 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="proxy-httpd" containerID="cri-o://0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45" gracePeriod=30 Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.806281 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="sg-core" 
containerID="cri-o://ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6" gracePeriod=30 Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.820807 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hhnhk" event={"ID":"c36d7741-4744-4076-ad79-2cd1aca48cec","Type":"ContainerDied","Data":"e26f446bcb04b3f808f0b4afcc70bb30054b40769c5cb61b264416dc8a1d7344"} Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.820894 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e26f446bcb04b3f808f0b4afcc70bb30054b40769c5cb61b264416dc8a1d7344" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.821039 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hhnhk" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.847750 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" event={"ID":"c2a80451-d670-436c-9da0-20e3aec8e2ad","Type":"ContainerDied","Data":"8233e68325d49b1a616707b6115119ea13c558a3642d5fcf4facc13195354585"} Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.847917 4711 scope.go:117] "RemoveContainer" containerID="f48da6b8eb9365751b6196964e060694bcc396d60ce033a727cf4a1fcea6cf8d" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.848166 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-jvjm2" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.892362 4711 scope.go:117] "RemoveContainer" containerID="13f6e115c5c98c7af32a89dd1aa6aff2c5fb3c0deb9942298a086d8a09c382f9" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.901538 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jvjm2"] Dec 02 10:33:29 crc kubenswrapper[4711]: E1202 10:33:29.908721 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc36d7741_4744_4076_ad79_2cd1aca48cec.slice/crio-e26f446bcb04b3f808f0b4afcc70bb30054b40769c5cb61b264416dc8a1d7344\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2a80451_d670_436c_9da0_20e3aec8e2ad.slice/crio-8233e68325d49b1a616707b6115119ea13c558a3642d5fcf4facc13195354585\": RecentStats: unable to find data in memory cache]" Dec 02 10:33:29 crc kubenswrapper[4711]: I1202 10:33:29.919920 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jvjm2"] Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.439219 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:33:30 crc kubenswrapper[4711]: E1202 10:33:30.441457 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36d7741-4744-4076-ad79-2cd1aca48cec" containerName="cinder-db-sync" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.441485 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36d7741-4744-4076-ad79-2cd1aca48cec" containerName="cinder-db-sync" Dec 02 10:33:30 crc kubenswrapper[4711]: E1202 10:33:30.441522 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a80451-d670-436c-9da0-20e3aec8e2ad" containerName="init" Dec 02 10:33:30 crc 
kubenswrapper[4711]: I1202 10:33:30.441529 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a80451-d670-436c-9da0-20e3aec8e2ad" containerName="init" Dec 02 10:33:30 crc kubenswrapper[4711]: E1202 10:33:30.441548 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a80451-d670-436c-9da0-20e3aec8e2ad" containerName="dnsmasq-dns" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.441554 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a80451-d670-436c-9da0-20e3aec8e2ad" containerName="dnsmasq-dns" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.441714 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36d7741-4744-4076-ad79-2cd1aca48cec" containerName="cinder-db-sync" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.441740 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a80451-d670-436c-9da0-20e3aec8e2ad" containerName="dnsmasq-dns" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.442813 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.454443 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.467525 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.467803 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.467962 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cpm6m" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.467996 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.573176 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb54k\" (UniqueName: \"kubernetes.io/projected/ec53b7f7-0052-423b-b886-82861ff0d7fe-kube-api-access-vb54k\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.573540 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.573800 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.573851 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.574097 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec53b7f7-0052-423b-b886-82861ff0d7fe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.574189 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.601467 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gjj5v"] Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.603067 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.621089 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gjj5v"] Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.676187 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.676461 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb54k\" (UniqueName: \"kubernetes.io/projected/ec53b7f7-0052-423b-b886-82861ff0d7fe-kube-api-access-vb54k\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.676578 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.676697 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.677026 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.677204 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec53b7f7-0052-423b-b886-82861ff0d7fe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.677385 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec53b7f7-0052-423b-b886-82861ff0d7fe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.685544 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.686216 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.687612 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.705570 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.716503 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb54k\" (UniqueName: \"kubernetes.io/projected/ec53b7f7-0052-423b-b886-82861ff0d7fe-kube-api-access-vb54k\") pod \"cinder-scheduler-0\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.778831 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.778874 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.778921 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxvkt\" (UniqueName: \"kubernetes.io/projected/f4b309e7-82c2-4d00-9f80-ff4789ddd307-kube-api-access-kxvkt\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.778957 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.779050 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.779101 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-config\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.800520 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.819892 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.821334 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.825662 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.829348 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.880898 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.880936 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.881001 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxvkt\" (UniqueName: \"kubernetes.io/projected/f4b309e7-82c2-4d00-9f80-ff4789ddd307-kube-api-access-kxvkt\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.881022 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.881049 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.881070 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-config\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.882425 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-config\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.882500 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.885299 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.886481 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.893979 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.897028 4711 generic.go:334] "Generic (PLEG): container finished" podID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerID="0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45" exitCode=0 Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.897063 4711 generic.go:334] "Generic (PLEG): container finished" podID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerID="ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6" exitCode=2 Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.897087 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98dbf68a-a027-4b09-a124-5438406d4b4f","Type":"ContainerDied","Data":"0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45"} Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.897127 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98dbf68a-a027-4b09-a124-5438406d4b4f","Type":"ContainerDied","Data":"ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6"} Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.902751 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxvkt\" (UniqueName: \"kubernetes.io/projected/f4b309e7-82c2-4d00-9f80-ff4789ddd307-kube-api-access-kxvkt\") pod 
\"dnsmasq-dns-6578955fd5-gjj5v\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.933340 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.988211 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-scripts\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.988290 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.988389 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnvg\" (UniqueName: \"kubernetes.io/projected/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-kube-api-access-fhnvg\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.988454 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.988518 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.988594 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-logs\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:30 crc kubenswrapper[4711]: I1202 10:33:30.988649 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.090132 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-scripts\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.090481 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.090548 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhnvg\" (UniqueName: \"kubernetes.io/projected/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-kube-api-access-fhnvg\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 
10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.090576 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.090617 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.090671 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-logs\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.090717 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.092421 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-logs\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.092487 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.097296 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.099865 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.100068 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-scripts\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.112109 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.137603 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhnvg\" (UniqueName: \"kubernetes.io/projected/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-kube-api-access-fhnvg\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.142177 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " 
pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.181034 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a80451-d670-436c-9da0-20e3aec8e2ad" path="/var/lib/kubelet/pods/c2a80451-d670-436c-9da0-20e3aec8e2ad/volumes" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.202386 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.307589 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:33:31 crc kubenswrapper[4711]: W1202 10:33:31.425453 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec53b7f7_0052_423b_b886_82861ff0d7fe.slice/crio-db54f6354fa0a33c715ac50a8afec2151fdad23571f0479425c978769f51e853 WatchSource:0}: Error finding container db54f6354fa0a33c715ac50a8afec2151fdad23571f0479425c978769f51e853: Status 404 returned error can't find the container with id db54f6354fa0a33c715ac50a8afec2151fdad23571f0479425c978769f51e853 Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.547900 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gjj5v"] Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.780114 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.940994 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002","Type":"ContainerStarted","Data":"e2fb0eb6d4fac155449fa1ce7b7d7e90fdee9047c0eb6cd4ecb03ebae11aa375"} Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.951052 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"ec53b7f7-0052-423b-b886-82861ff0d7fe","Type":"ContainerStarted","Data":"db54f6354fa0a33c715ac50a8afec2151fdad23571f0479425c978769f51e853"} Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.963709 4711 generic.go:334] "Generic (PLEG): container finished" podID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerID="f4ff8ab094e9c8e8e92082c13500bdddc1d241cdf4952cc6b2a062016c5737c6" exitCode=137 Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.963736 4711 generic.go:334] "Generic (PLEG): container finished" podID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerID="fefb51e0fa93ac740dbe66a6ebc38cd2d7b69807a2a926149dacefe8912ebeba" exitCode=137 Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.963771 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7855f9b6bf-w42l8" event={"ID":"e2d7602c-dae1-4110-b8db-aa51a0761754","Type":"ContainerDied","Data":"f4ff8ab094e9c8e8e92082c13500bdddc1d241cdf4952cc6b2a062016c5737c6"} Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.963796 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7855f9b6bf-w42l8" event={"ID":"e2d7602c-dae1-4110-b8db-aa51a0761754","Type":"ContainerDied","Data":"fefb51e0fa93ac740dbe66a6ebc38cd2d7b69807a2a926149dacefe8912ebeba"} Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.971520 4711 generic.go:334] "Generic (PLEG): container finished" podID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" containerID="3fb5dc5d29faff5845344700b5e1b2d2031d93c525296829d4a377dd8a837184" exitCode=0 Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.971555 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" event={"ID":"f4b309e7-82c2-4d00-9f80-ff4789ddd307","Type":"ContainerDied","Data":"3fb5dc5d29faff5845344700b5e1b2d2031d93c525296829d4a377dd8a837184"} Dec 02 10:33:31 crc kubenswrapper[4711]: I1202 10:33:31.971577 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" event={"ID":"f4b309e7-82c2-4d00-9f80-ff4789ddd307","Type":"ContainerStarted","Data":"fcbb83faf96da0e9362a6a76fc0279f85674d614fb9db448095364e21bf949fb"} Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.011363 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.246363 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.324441 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d7602c-dae1-4110-b8db-aa51a0761754-logs\") pod \"e2d7602c-dae1-4110-b8db-aa51a0761754\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.324503 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-scripts\") pod \"e2d7602c-dae1-4110-b8db-aa51a0761754\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.324571 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-config-data\") pod \"e2d7602c-dae1-4110-b8db-aa51a0761754\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.324598 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2d7602c-dae1-4110-b8db-aa51a0761754-horizon-secret-key\") pod \"e2d7602c-dae1-4110-b8db-aa51a0761754\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 
10:33:32.324672 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5rrj\" (UniqueName: \"kubernetes.io/projected/e2d7602c-dae1-4110-b8db-aa51a0761754-kube-api-access-m5rrj\") pod \"e2d7602c-dae1-4110-b8db-aa51a0761754\" (UID: \"e2d7602c-dae1-4110-b8db-aa51a0761754\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.332478 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d7602c-dae1-4110-b8db-aa51a0761754-kube-api-access-m5rrj" (OuterVolumeSpecName: "kube-api-access-m5rrj") pod "e2d7602c-dae1-4110-b8db-aa51a0761754" (UID: "e2d7602c-dae1-4110-b8db-aa51a0761754"). InnerVolumeSpecName "kube-api-access-m5rrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.338155 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d7602c-dae1-4110-b8db-aa51a0761754-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e2d7602c-dae1-4110-b8db-aa51a0761754" (UID: "e2d7602c-dae1-4110-b8db-aa51a0761754"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.339872 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d7602c-dae1-4110-b8db-aa51a0761754-logs" (OuterVolumeSpecName: "logs") pod "e2d7602c-dae1-4110-b8db-aa51a0761754" (UID: "e2d7602c-dae1-4110-b8db-aa51a0761754"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.366191 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-config-data" (OuterVolumeSpecName: "config-data") pod "e2d7602c-dae1-4110-b8db-aa51a0761754" (UID: "e2d7602c-dae1-4110-b8db-aa51a0761754"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.396506 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-scripts" (OuterVolumeSpecName: "scripts") pod "e2d7602c-dae1-4110-b8db-aa51a0761754" (UID: "e2d7602c-dae1-4110-b8db-aa51a0761754"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.407239 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.427926 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d7602c-dae1-4110-b8db-aa51a0761754-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.427976 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.427989 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2d7602c-dae1-4110-b8db-aa51a0761754-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.428000 4711 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e2d7602c-dae1-4110-b8db-aa51a0761754-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.428012 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5rrj\" (UniqueName: \"kubernetes.io/projected/e2d7602c-dae1-4110-b8db-aa51a0761754-kube-api-access-m5rrj\") on node \"crc\" DevicePath 
\"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.530157 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-run-httpd\") pod \"98dbf68a-a027-4b09-a124-5438406d4b4f\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.530221 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-config-data\") pod \"98dbf68a-a027-4b09-a124-5438406d4b4f\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.530390 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-sg-core-conf-yaml\") pod \"98dbf68a-a027-4b09-a124-5438406d4b4f\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.530504 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-log-httpd\") pod \"98dbf68a-a027-4b09-a124-5438406d4b4f\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.530568 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-combined-ca-bundle\") pod \"98dbf68a-a027-4b09-a124-5438406d4b4f\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.533452 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2krl\" (UniqueName: 
\"kubernetes.io/projected/98dbf68a-a027-4b09-a124-5438406d4b4f-kube-api-access-h2krl\") pod \"98dbf68a-a027-4b09-a124-5438406d4b4f\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.533520 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-scripts\") pod \"98dbf68a-a027-4b09-a124-5438406d4b4f\" (UID: \"98dbf68a-a027-4b09-a124-5438406d4b4f\") " Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.533723 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "98dbf68a-a027-4b09-a124-5438406d4b4f" (UID: "98dbf68a-a027-4b09-a124-5438406d4b4f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.535198 4711 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.535355 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "98dbf68a-a027-4b09-a124-5438406d4b4f" (UID: "98dbf68a-a027-4b09-a124-5438406d4b4f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.545421 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dbf68a-a027-4b09-a124-5438406d4b4f-kube-api-access-h2krl" (OuterVolumeSpecName: "kube-api-access-h2krl") pod "98dbf68a-a027-4b09-a124-5438406d4b4f" (UID: "98dbf68a-a027-4b09-a124-5438406d4b4f"). 
InnerVolumeSpecName "kube-api-access-h2krl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.557678 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-scripts" (OuterVolumeSpecName: "scripts") pod "98dbf68a-a027-4b09-a124-5438406d4b4f" (UID: "98dbf68a-a027-4b09-a124-5438406d4b4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.637784 4711 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98dbf68a-a027-4b09-a124-5438406d4b4f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.637821 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2krl\" (UniqueName: \"kubernetes.io/projected/98dbf68a-a027-4b09-a124-5438406d4b4f-kube-api-access-h2krl\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.637833 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.643504 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "98dbf68a-a027-4b09-a124-5438406d4b4f" (UID: "98dbf68a-a027-4b09-a124-5438406d4b4f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.667419 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-config-data" (OuterVolumeSpecName: "config-data") pod "98dbf68a-a027-4b09-a124-5438406d4b4f" (UID: "98dbf68a-a027-4b09-a124-5438406d4b4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.680389 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98dbf68a-a027-4b09-a124-5438406d4b4f" (UID: "98dbf68a-a027-4b09-a124-5438406d4b4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.746199 4711 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.746225 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.746234 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dbf68a-a027-4b09-a124-5438406d4b4f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.838320 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.855257 4711 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/barbican-api-fddf747c8-8wktl" Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.968910 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7759cb7c46-754xb"] Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.969197 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7759cb7c46-754xb" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api-log" containerID="cri-o://fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd" gracePeriod=30 Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.969816 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7759cb7c46-754xb" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api" containerID="cri-o://0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7" gracePeriod=30 Dec 02 10:33:32 crc kubenswrapper[4711]: I1202 10:33:32.984613 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7759cb7c46-754xb" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": EOF" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.023381 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7855f9b6bf-w42l8" event={"ID":"e2d7602c-dae1-4110-b8db-aa51a0761754","Type":"ContainerDied","Data":"edd59065c7404e86bbd95cde40799367953fee0723f180ee466b06faa87a95f5"} Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.023433 4711 scope.go:117] "RemoveContainer" containerID="f4ff8ab094e9c8e8e92082c13500bdddc1d241cdf4952cc6b2a062016c5737c6" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.023551 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7855f9b6bf-w42l8" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.080264 4711 generic.go:334] "Generic (PLEG): container finished" podID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerID="4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff" exitCode=0 Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.080596 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.160612 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98dbf68a-a027-4b09-a124-5438406d4b4f","Type":"ContainerDied","Data":"4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff"} Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.160653 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98dbf68a-a027-4b09-a124-5438406d4b4f","Type":"ContainerDied","Data":"964192f9500ff93cf7b375a2675820bea29178eeb891ebb6f56b17ca398d56a6"} Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.160666 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002","Type":"ContainerStarted","Data":"26a915943de6de8f0998a9b286479b346761b7980b3e14730bd1b0ebfb6e8e71"} Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.192325 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" event={"ID":"f4b309e7-82c2-4d00-9f80-ff4789ddd307","Type":"ContainerStarted","Data":"ffb9c6b35897cd1dea0632fd5cfde3deb118a137560621fd694bfd40bd7c7a0b"} Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.192368 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.215018 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-7855f9b6bf-w42l8"] Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.251131 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.252684 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.262522 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7855f9b6bf-w42l8"] Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.299487 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.306899 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.312568 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" podStartSLOduration=3.312546667 podStartE2EDuration="3.312546667s" podCreationTimestamp="2025-12-02 10:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:33.258616921 +0000 UTC m=+1202.967983388" watchObservedRunningTime="2025-12-02 10:33:33.312546667 +0000 UTC m=+1203.021913124" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.321318 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:33:33 crc kubenswrapper[4711]: E1202 10:33:33.322015 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerName="horizon" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322068 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerName="horizon" Dec 02 10:33:33 crc kubenswrapper[4711]: E1202 
10:33:33.322079 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="ceilometer-notification-agent" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322085 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="ceilometer-notification-agent" Dec 02 10:33:33 crc kubenswrapper[4711]: E1202 10:33:33.322093 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="sg-core" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322098 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="sg-core" Dec 02 10:33:33 crc kubenswrapper[4711]: E1202 10:33:33.322125 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="proxy-httpd" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322132 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="proxy-httpd" Dec 02 10:33:33 crc kubenswrapper[4711]: E1202 10:33:33.322166 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerName="horizon-log" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322171 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerName="horizon-log" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322395 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="proxy-httpd" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322421 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerName="horizon-log" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322450 
4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" containerName="horizon" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322464 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="sg-core" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.322471 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" containerName="ceilometer-notification-agent" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.324586 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.328421 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.328623 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.330490 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.349253 4711 scope.go:117] "RemoveContainer" containerID="fefb51e0fa93ac740dbe66a6ebc38cd2d7b69807a2a926149dacefe8912ebeba" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.420056 4711 scope.go:117] "RemoveContainer" containerID="0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.473472 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-log-httpd\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.473566 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.473603 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-run-httpd\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.473632 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.473694 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-config-data\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.473815 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlcjz\" (UniqueName: \"kubernetes.io/projected/1413d8be-89a1-43be-a1b6-b8072da4af1b-kube-api-access-hlcjz\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.473870 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-scripts\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.491794 4711 scope.go:117] "RemoveContainer" containerID="ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.524106 4711 scope.go:117] "RemoveContainer" containerID="4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.556200 4711 scope.go:117] "RemoveContainer" containerID="0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45" Dec 02 10:33:33 crc kubenswrapper[4711]: E1202 10:33:33.556866 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45\": container with ID starting with 0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45 not found: ID does not exist" containerID="0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.556927 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45"} err="failed to get container status \"0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45\": rpc error: code = NotFound desc = could not find container \"0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45\": container with ID starting with 0030c0c5ec6380c87a6294a222d17cc308d3f52f62b799ef361ee7726a632b45 not found: ID does not exist" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.556974 4711 scope.go:117] "RemoveContainer" containerID="ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6" Dec 02 10:33:33 crc kubenswrapper[4711]: 
E1202 10:33:33.569321 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6\": container with ID starting with ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6 not found: ID does not exist" containerID="ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.569385 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6"} err="failed to get container status \"ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6\": rpc error: code = NotFound desc = could not find container \"ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6\": container with ID starting with ceaf8690b037fd051859ddd8ef8c9574821641d2711cc91c16cbb1d7def1f7b6 not found: ID does not exist" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.569423 4711 scope.go:117] "RemoveContainer" containerID="4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff" Dec 02 10:33:33 crc kubenswrapper[4711]: E1202 10:33:33.573212 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff\": container with ID starting with 4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff not found: ID does not exist" containerID="4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.573283 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff"} err="failed to get container status \"4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff\": 
rpc error: code = NotFound desc = could not find container \"4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff\": container with ID starting with 4782bf4fa9f749acb07b4baa468a0117462b37c71235b5553c0769be1d14cdff not found: ID does not exist" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.575310 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-log-httpd\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.575371 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.575405 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-run-httpd\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.575447 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.575672 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-config-data\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " 
pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.576684 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlcjz\" (UniqueName: \"kubernetes.io/projected/1413d8be-89a1-43be-a1b6-b8072da4af1b-kube-api-access-hlcjz\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.576729 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-scripts\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.576116 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-run-httpd\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.576064 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-log-httpd\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.582155 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.585930 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-scripts\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.587919 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-config-data\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.593928 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.599849 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlcjz\" (UniqueName: \"kubernetes.io/projected/1413d8be-89a1-43be-a1b6-b8072da4af1b-kube-api-access-hlcjz\") pod \"ceilometer-0\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " pod="openstack/ceilometer-0" Dec 02 10:33:33 crc kubenswrapper[4711]: I1202 10:33:33.655774 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.187372 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.246218 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec53b7f7-0052-423b-b886-82861ff0d7fe","Type":"ContainerStarted","Data":"50ad3fcc3190aeafd15a08e8175caac18d43d29acc090664a74094b03d61e267"} Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.252100 4711 generic.go:334] "Generic (PLEG): container finished" podID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerID="fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd" exitCode=143 Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.252320 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7759cb7c46-754xb" event={"ID":"db67a475-c964-4dd8-b9e0-2c5d198b7a3c","Type":"ContainerDied","Data":"fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd"} Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.256980 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerName="cinder-api-log" containerID="cri-o://26a915943de6de8f0998a9b286479b346761b7980b3e14730bd1b0ebfb6e8e71" gracePeriod=30 Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.257635 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002","Type":"ContainerStarted","Data":"7e92ccb542e63746ea48f09a05fb085aef1bc41d3be4217787bd79f974e9f156"} Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.257850 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.258314 4711 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/cinder-api-0" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerName="cinder-api" containerID="cri-o://7e92ccb542e63746ea48f09a05fb085aef1bc41d3be4217787bd79f974e9f156" gracePeriod=30 Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.291798 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.291780114 podStartE2EDuration="4.291780114s" podCreationTimestamp="2025-12-02 10:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:34.288637048 +0000 UTC m=+1203.998003495" watchObservedRunningTime="2025-12-02 10:33:34.291780114 +0000 UTC m=+1204.001146561" Dec 02 10:33:34 crc kubenswrapper[4711]: I1202 10:33:34.645331 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.089172 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dbf68a-a027-4b09-a124-5438406d4b4f" path="/var/lib/kubelet/pods/98dbf68a-a027-4b09-a124-5438406d4b4f/volumes" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.090214 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d7602c-dae1-4110-b8db-aa51a0761754" path="/var/lib/kubelet/pods/e2d7602c-dae1-4110-b8db-aa51a0761754/volumes" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.275310 4711 generic.go:334] "Generic (PLEG): container finished" podID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerID="7e92ccb542e63746ea48f09a05fb085aef1bc41d3be4217787bd79f974e9f156" exitCode=0 Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.275336 4711 generic.go:334] "Generic (PLEG): container finished" podID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerID="26a915943de6de8f0998a9b286479b346761b7980b3e14730bd1b0ebfb6e8e71" exitCode=143 Dec 02 10:33:35 
crc kubenswrapper[4711]: I1202 10:33:35.275383 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002","Type":"ContainerDied","Data":"7e92ccb542e63746ea48f09a05fb085aef1bc41d3be4217787bd79f974e9f156"} Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.275409 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002","Type":"ContainerDied","Data":"26a915943de6de8f0998a9b286479b346761b7980b3e14730bd1b0ebfb6e8e71"} Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.281897 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec53b7f7-0052-423b-b886-82861ff0d7fe","Type":"ContainerStarted","Data":"05a43d1badc2458a044e8ee9b36da175b623d18c76ddc433e87c3aea77edade1"} Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.283725 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerStarted","Data":"1a0e91b3cb86c6f0ea05823c83b5ce7d09dc668287464384349b7897b95ac559"} Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.309907 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.077862199 podStartE2EDuration="5.309881677s" podCreationTimestamp="2025-12-02 10:33:30 +0000 UTC" firstStartedPulling="2025-12-02 10:33:31.428373643 +0000 UTC m=+1201.137740090" lastFinishedPulling="2025-12-02 10:33:32.660393121 +0000 UTC m=+1202.369759568" observedRunningTime="2025-12-02 10:33:35.304726416 +0000 UTC m=+1205.014092863" watchObservedRunningTime="2025-12-02 10:33:35.309881677 +0000 UTC m=+1205.019248124" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.494331 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.619715 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-scripts\") pod \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.619794 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhnvg\" (UniqueName: \"kubernetes.io/projected/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-kube-api-access-fhnvg\") pod \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.619862 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-combined-ca-bundle\") pod \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.619992 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-etc-machine-id\") pod \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.620125 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-logs\") pod \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.620173 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data-custom\") pod \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.620223 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data\") pod \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\" (UID: \"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002\") " Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.620266 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" (UID: "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.620708 4711 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.620719 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-logs" (OuterVolumeSpecName: "logs") pod "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" (UID: "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.626224 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-kube-api-access-fhnvg" (OuterVolumeSpecName: "kube-api-access-fhnvg") pod "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" (UID: "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002"). InnerVolumeSpecName "kube-api-access-fhnvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.626326 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" (UID: "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.626413 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-scripts" (OuterVolumeSpecName: "scripts") pod "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" (UID: "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.656144 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" (UID: "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.723048 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhnvg\" (UniqueName: \"kubernetes.io/projected/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-kube-api-access-fhnvg\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.723082 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.723091 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.723100 4711 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.723126 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.728081 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data" (OuterVolumeSpecName: "config-data") pod "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" (UID: "ffa3b2c6-2bfa-41c2-8f63-f0628f59a002"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.801684 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.824605 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.984167 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:33:35 crc kubenswrapper[4711]: I1202 10:33:35.988722 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6b4d9565bd-5nwjn" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.065140 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76998c6f5b-xhr78"] Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.295653 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerStarted","Data":"b64cbc1d219a1320cf549b465a4bc6f8442511eb0eec73d9813988e5b1613852"} Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.295918 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerStarted","Data":"1248f05c73ce0243ea7bed4336b7491f7cea3adf2fd9fdaf8d829dd99ddc2f4f"} Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.297691 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffa3b2c6-2bfa-41c2-8f63-f0628f59a002","Type":"ContainerDied","Data":"e2fb0eb6d4fac155449fa1ce7b7d7e90fdee9047c0eb6cd4ecb03ebae11aa375"} Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.297747 4711 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.297820 4711 scope.go:117] "RemoveContainer" containerID="7e92ccb542e63746ea48f09a05fb085aef1bc41d3be4217787bd79f974e9f156" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.298013 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76998c6f5b-xhr78" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon-log" containerID="cri-o://fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571" gracePeriod=30 Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.298144 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76998c6f5b-xhr78" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon" containerID="cri-o://889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd" gracePeriod=30 Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.326753 4711 scope.go:117] "RemoveContainer" containerID="26a915943de6de8f0998a9b286479b346761b7980b3e14730bd1b0ebfb6e8e71" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.332134 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.342028 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.387017 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:36 crc kubenswrapper[4711]: E1202 10:33:36.387488 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerName="cinder-api-log" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.387513 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerName="cinder-api-log" Dec 02 10:33:36 crc 
kubenswrapper[4711]: E1202 10:33:36.387535 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerName="cinder-api" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.387544 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerName="cinder-api" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.387766 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerName="cinder-api" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.387792 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" containerName="cinder-api-log" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.388998 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.391290 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.392192 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.392311 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.399216 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.539216 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc 
kubenswrapper[4711]: I1202 10:33:36.539293 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1652776-ac6b-4033-a6b0-e0272ce72b34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.539381 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.539419 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1652776-ac6b-4033-a6b0-e0272ce72b34-logs\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.539463 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-config-data-custom\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.539507 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-scripts\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.539566 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f4dgz\" (UniqueName: \"kubernetes.io/projected/d1652776-ac6b-4033-a6b0-e0272ce72b34-kube-api-access-f4dgz\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.539676 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-config-data\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.539727 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.647718 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-config-data\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.647860 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.648035 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.648106 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1652776-ac6b-4033-a6b0-e0272ce72b34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.648250 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.648301 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1652776-ac6b-4033-a6b0-e0272ce72b34-logs\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.648319 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1652776-ac6b-4033-a6b0-e0272ce72b34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.648370 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-config-data-custom\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.648433 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-scripts\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.648536 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4dgz\" (UniqueName: \"kubernetes.io/projected/d1652776-ac6b-4033-a6b0-e0272ce72b34-kube-api-access-f4dgz\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.649793 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1652776-ac6b-4033-a6b0-e0272ce72b34-logs\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.653809 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.660572 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.661544 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 
10:33:36.664112 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-config-data\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.672750 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-config-data-custom\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.672901 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1652776-ac6b-4033-a6b0-e0272ce72b34-scripts\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.678468 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4dgz\" (UniqueName: \"kubernetes.io/projected/d1652776-ac6b-4033-a6b0-e0272ce72b34-kube-api-access-f4dgz\") pod \"cinder-api-0\" (UID: \"d1652776-ac6b-4033-a6b0-e0272ce72b34\") " pod="openstack/cinder-api-0" Dec 02 10:33:36 crc kubenswrapper[4711]: I1202 10:33:36.713095 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.089478 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa3b2c6-2bfa-41c2-8f63-f0628f59a002" path="/var/lib/kubelet/pods/ffa3b2c6-2bfa-41c2-8f63-f0628f59a002/volumes" Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.260068 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.312451 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b445d9db9-64xt2" Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.337265 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerStarted","Data":"a0291ad592b5bc9d091c1ec1c0d08c1c41448703242d09064e003f77bf954c3a"} Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.349589 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d1652776-ac6b-4033-a6b0-e0272ce72b34","Type":"ContainerStarted","Data":"1e4f5b589f103376cf67de5fe427bcf7ee5b18b057369fd82912a73faca35d67"} Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.427667 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c7694fb9b-29wkn"] Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.428089 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c7694fb9b-29wkn" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerName="neutron-api" containerID="cri-o://65719ca8b2cd671cf2d07b24c7c6b6545e86c7b7ba2e9697aa489beb7f8b36d2" gracePeriod=30 Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.428455 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c7694fb9b-29wkn" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerName="neutron-httpd" 
containerID="cri-o://a4614fb25fe8ccc103806abab23ae35b75858afdde026a38f1440114e1ab7712" gracePeriod=30 Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.627284 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7759cb7c46-754xb" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:54344->10.217.0.161:9311: read: connection reset by peer" Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.627353 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7759cb7c46-754xb" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:54348->10.217.0.161:9311: read: connection reset by peer" Dec 02 10:33:37 crc kubenswrapper[4711]: I1202 10:33:37.951962 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.099920 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdggq\" (UniqueName: \"kubernetes.io/projected/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-kube-api-access-jdggq\") pod \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.100390 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-logs\") pod \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.100454 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data-custom\") pod \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.100491 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-combined-ca-bundle\") pod \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.100685 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data\") pod \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\" (UID: \"db67a475-c964-4dd8-b9e0-2c5d198b7a3c\") " Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.100830 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-logs" (OuterVolumeSpecName: "logs") pod "db67a475-c964-4dd8-b9e0-2c5d198b7a3c" (UID: "db67a475-c964-4dd8-b9e0-2c5d198b7a3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.101152 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.107707 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "db67a475-c964-4dd8-b9e0-2c5d198b7a3c" (UID: "db67a475-c964-4dd8-b9e0-2c5d198b7a3c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.107787 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-kube-api-access-jdggq" (OuterVolumeSpecName: "kube-api-access-jdggq") pod "db67a475-c964-4dd8-b9e0-2c5d198b7a3c" (UID: "db67a475-c964-4dd8-b9e0-2c5d198b7a3c"). InnerVolumeSpecName "kube-api-access-jdggq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.142828 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db67a475-c964-4dd8-b9e0-2c5d198b7a3c" (UID: "db67a475-c964-4dd8-b9e0-2c5d198b7a3c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.170303 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data" (OuterVolumeSpecName: "config-data") pod "db67a475-c964-4dd8-b9e0-2c5d198b7a3c" (UID: "db67a475-c964-4dd8-b9e0-2c5d198b7a3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.206940 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.206991 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdggq\" (UniqueName: \"kubernetes.io/projected/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-kube-api-access-jdggq\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.207003 4711 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.207013 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67a475-c964-4dd8-b9e0-2c5d198b7a3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.378177 4711 generic.go:334] "Generic (PLEG): container finished" podID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerID="0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7" exitCode=0 Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.378268 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7759cb7c46-754xb" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.378312 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7759cb7c46-754xb" event={"ID":"db67a475-c964-4dd8-b9e0-2c5d198b7a3c","Type":"ContainerDied","Data":"0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7"} Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.378363 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7759cb7c46-754xb" event={"ID":"db67a475-c964-4dd8-b9e0-2c5d198b7a3c","Type":"ContainerDied","Data":"042e457ccc7d95b94f72de1bd0f318e7f54f8288a7ea6237aa3dcac440dfbf2b"} Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.378387 4711 scope.go:117] "RemoveContainer" containerID="0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.384334 4711 generic.go:334] "Generic (PLEG): container finished" podID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerID="a4614fb25fe8ccc103806abab23ae35b75858afdde026a38f1440114e1ab7712" exitCode=0 Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.384401 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c7694fb9b-29wkn" event={"ID":"87082f2a-7ab6-44ad-95e0-226a0b7b416d","Type":"ContainerDied","Data":"a4614fb25fe8ccc103806abab23ae35b75858afdde026a38f1440114e1ab7712"} Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.387160 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d1652776-ac6b-4033-a6b0-e0272ce72b34","Type":"ContainerStarted","Data":"3543093d328df8d1285de25a11d17e242a0866ffb6e4eb40fdf721bcafc35733"} Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.407153 4711 scope.go:117] "RemoveContainer" containerID="fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.425210 4711 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7759cb7c46-754xb"] Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.430677 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7759cb7c46-754xb"] Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.435580 4711 scope.go:117] "RemoveContainer" containerID="0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7" Dec 02 10:33:38 crc kubenswrapper[4711]: E1202 10:33:38.436081 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7\": container with ID starting with 0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7 not found: ID does not exist" containerID="0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.436131 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7"} err="failed to get container status \"0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7\": rpc error: code = NotFound desc = could not find container \"0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7\": container with ID starting with 0636b82dae68379484c2ca796fd0fbfa901e7e95c2eb0de4c408021f73d3ede7 not found: ID does not exist" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.436165 4711 scope.go:117] "RemoveContainer" containerID="fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd" Dec 02 10:33:38 crc kubenswrapper[4711]: E1202 10:33:38.440226 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd\": container with ID starting with 
fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd not found: ID does not exist" containerID="fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd" Dec 02 10:33:38 crc kubenswrapper[4711]: I1202 10:33:38.440264 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd"} err="failed to get container status \"fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd\": rpc error: code = NotFound desc = could not find container \"fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd\": container with ID starting with fc6472e182fd1c8a4e68121dce023f6d6d0224da3d371033bd2fa4fcda1879bd not found: ID does not exist" Dec 02 10:33:39 crc kubenswrapper[4711]: I1202 10:33:39.093650 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" path="/var/lib/kubelet/pods/db67a475-c964-4dd8-b9e0-2c5d198b7a3c/volumes" Dec 02 10:33:39 crc kubenswrapper[4711]: I1202 10:33:39.416065 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerStarted","Data":"1008b4a91161580b41c6d778c0398668db98dc3893145ba493d576ca94acc79a"} Dec 02 10:33:39 crc kubenswrapper[4711]: I1202 10:33:39.416190 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:33:39 crc kubenswrapper[4711]: I1202 10:33:39.424183 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d1652776-ac6b-4033-a6b0-e0272ce72b34","Type":"ContainerStarted","Data":"fa336e374f3ea22e2209c965f2d5a12ea79980714b91b68e429b2491b6572a51"} Dec 02 10:33:39 crc kubenswrapper[4711]: I1202 10:33:39.424350 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 02 10:33:39 crc kubenswrapper[4711]: I1202 10:33:39.448704 
4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.16505623 podStartE2EDuration="6.448678683s" podCreationTimestamp="2025-12-02 10:33:33 +0000 UTC" firstStartedPulling="2025-12-02 10:33:34.235579706 +0000 UTC m=+1203.944946153" lastFinishedPulling="2025-12-02 10:33:38.519202159 +0000 UTC m=+1208.228568606" observedRunningTime="2025-12-02 10:33:39.442475664 +0000 UTC m=+1209.151842111" watchObservedRunningTime="2025-12-02 10:33:39.448678683 +0000 UTC m=+1209.158045130" Dec 02 10:33:39 crc kubenswrapper[4711]: I1202 10:33:39.490570 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.4905484319999998 podStartE2EDuration="3.490548432s" podCreationTimestamp="2025-12-02 10:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:39.472913692 +0000 UTC m=+1209.182280219" watchObservedRunningTime="2025-12-02 10:33:39.490548432 +0000 UTC m=+1209.199914879" Dec 02 10:33:39 crc kubenswrapper[4711]: I1202 10:33:39.757100 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76998c6f5b-xhr78" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 02 10:33:40 crc kubenswrapper[4711]: I1202 10:33:40.437075 4711 generic.go:334] "Generic (PLEG): container finished" podID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerID="889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd" exitCode=0 Dec 02 10:33:40 crc kubenswrapper[4711]: I1202 10:33:40.438232 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76998c6f5b-xhr78" 
event={"ID":"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd","Type":"ContainerDied","Data":"889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd"} Dec 02 10:33:40 crc kubenswrapper[4711]: I1202 10:33:40.935131 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.031374 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pmlwz"] Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.031597 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" podUID="8008b774-a8f4-426e-b0f8-7073e53c9fae" containerName="dnsmasq-dns" containerID="cri-o://944f90ec01056ec389a89e9f186c94b0482a5f9fb83134e0a0b872e7a4f9f32a" gracePeriod=10 Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.076059 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.156344 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.463548 4711 generic.go:334] "Generic (PLEG): container finished" podID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerID="65719ca8b2cd671cf2d07b24c7c6b6545e86c7b7ba2e9697aa489beb7f8b36d2" exitCode=0 Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.463876 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c7694fb9b-29wkn" event={"ID":"87082f2a-7ab6-44ad-95e0-226a0b7b416d","Type":"ContainerDied","Data":"65719ca8b2cd671cf2d07b24c7c6b6545e86c7b7ba2e9697aa489beb7f8b36d2"} Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.466142 4711 generic.go:334] "Generic (PLEG): container finished" podID="8008b774-a8f4-426e-b0f8-7073e53c9fae" containerID="944f90ec01056ec389a89e9f186c94b0482a5f9fb83134e0a0b872e7a4f9f32a" exitCode=0 
Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.466241 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" event={"ID":"8008b774-a8f4-426e-b0f8-7073e53c9fae","Type":"ContainerDied","Data":"944f90ec01056ec389a89e9f186c94b0482a5f9fb83134e0a0b872e7a4f9f32a"} Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.466360 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerName="cinder-scheduler" containerID="cri-o://50ad3fcc3190aeafd15a08e8175caac18d43d29acc090664a74094b03d61e267" gracePeriod=30 Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.466392 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerName="probe" containerID="cri-o://05a43d1badc2458a044e8ee9b36da175b623d18c76ddc433e87c3aea77edade1" gracePeriod=30 Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.653946 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.803572 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.809320 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-svc\") pod \"8008b774-a8f4-426e-b0f8-7073e53c9fae\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.809375 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7wdq\" (UniqueName: \"kubernetes.io/projected/8008b774-a8f4-426e-b0f8-7073e53c9fae-kube-api-access-w7wdq\") pod \"8008b774-a8f4-426e-b0f8-7073e53c9fae\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.809517 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-swift-storage-0\") pod \"8008b774-a8f4-426e-b0f8-7073e53c9fae\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.809597 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-config\") pod \"8008b774-a8f4-426e-b0f8-7073e53c9fae\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.809684 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-sb\") pod \"8008b774-a8f4-426e-b0f8-7073e53c9fae\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.809711 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-nb\") pod \"8008b774-a8f4-426e-b0f8-7073e53c9fae\" (UID: \"8008b774-a8f4-426e-b0f8-7073e53c9fae\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.815469 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8008b774-a8f4-426e-b0f8-7073e53c9fae-kube-api-access-w7wdq" (OuterVolumeSpecName: "kube-api-access-w7wdq") pod "8008b774-a8f4-426e-b0f8-7073e53c9fae" (UID: "8008b774-a8f4-426e-b0f8-7073e53c9fae"). InnerVolumeSpecName "kube-api-access-w7wdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.866803 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8008b774-a8f4-426e-b0f8-7073e53c9fae" (UID: "8008b774-a8f4-426e-b0f8-7073e53c9fae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.870044 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8008b774-a8f4-426e-b0f8-7073e53c9fae" (UID: "8008b774-a8f4-426e-b0f8-7073e53c9fae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.870675 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8008b774-a8f4-426e-b0f8-7073e53c9fae" (UID: "8008b774-a8f4-426e-b0f8-7073e53c9fae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.876556 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8008b774-a8f4-426e-b0f8-7073e53c9fae" (UID: "8008b774-a8f4-426e-b0f8-7073e53c9fae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.876863 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-config" (OuterVolumeSpecName: "config") pod "8008b774-a8f4-426e-b0f8-7073e53c9fae" (UID: "8008b774-a8f4-426e-b0f8-7073e53c9fae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.911582 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-httpd-config\") pod \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.911727 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-config\") pod \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.911871 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttkqh\" (UniqueName: \"kubernetes.io/projected/87082f2a-7ab6-44ad-95e0-226a0b7b416d-kube-api-access-ttkqh\") pod \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " Dec 02 10:33:41 
crc kubenswrapper[4711]: I1202 10:33:41.911903 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-ovndb-tls-certs\") pod \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.911932 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-combined-ca-bundle\") pod \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\" (UID: \"87082f2a-7ab6-44ad-95e0-226a0b7b416d\") " Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.912381 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.912405 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.912419 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.912431 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.912442 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8008b774-a8f4-426e-b0f8-7073e53c9fae-dns-svc\") on node \"crc\" 
DevicePath \"\"" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.912453 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7wdq\" (UniqueName: \"kubernetes.io/projected/8008b774-a8f4-426e-b0f8-7073e53c9fae-kube-api-access-w7wdq\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.915974 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "87082f2a-7ab6-44ad-95e0-226a0b7b416d" (UID: "87082f2a-7ab6-44ad-95e0-226a0b7b416d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.923701 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87082f2a-7ab6-44ad-95e0-226a0b7b416d-kube-api-access-ttkqh" (OuterVolumeSpecName: "kube-api-access-ttkqh") pod "87082f2a-7ab6-44ad-95e0-226a0b7b416d" (UID: "87082f2a-7ab6-44ad-95e0-226a0b7b416d"). InnerVolumeSpecName "kube-api-access-ttkqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.981447 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87082f2a-7ab6-44ad-95e0-226a0b7b416d" (UID: "87082f2a-7ab6-44ad-95e0-226a0b7b416d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.989894 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "87082f2a-7ab6-44ad-95e0-226a0b7b416d" (UID: "87082f2a-7ab6-44ad-95e0-226a0b7b416d"). 
InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:41 crc kubenswrapper[4711]: I1202 10:33:41.995642 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-config" (OuterVolumeSpecName: "config") pod "87082f2a-7ab6-44ad-95e0-226a0b7b416d" (UID: "87082f2a-7ab6-44ad-95e0-226a0b7b416d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.014005 4711 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.014056 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.014068 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttkqh\" (UniqueName: \"kubernetes.io/projected/87082f2a-7ab6-44ad-95e0-226a0b7b416d-kube-api-access-ttkqh\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.014081 4711 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.014094 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87082f2a-7ab6-44ad-95e0-226a0b7b416d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.478723 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c7694fb9b-29wkn" 
event={"ID":"87082f2a-7ab6-44ad-95e0-226a0b7b416d","Type":"ContainerDied","Data":"0d5e287981487b0e11bf1bec9cc5f7d9eace165697d80830569d723149eecf99"} Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.479032 4711 scope.go:117] "RemoveContainer" containerID="a4614fb25fe8ccc103806abab23ae35b75858afdde026a38f1440114e1ab7712" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.479155 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c7694fb9b-29wkn" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.486694 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.486697 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pmlwz" event={"ID":"8008b774-a8f4-426e-b0f8-7073e53c9fae","Type":"ContainerDied","Data":"b7433472abeb4218269b4f87a3a84f7bd574a49891badfba7ce34f5e33660b3f"} Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.490023 4711 generic.go:334] "Generic (PLEG): container finished" podID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerID="05a43d1badc2458a044e8ee9b36da175b623d18c76ddc433e87c3aea77edade1" exitCode=0 Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.490100 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec53b7f7-0052-423b-b886-82861ff0d7fe","Type":"ContainerDied","Data":"05a43d1badc2458a044e8ee9b36da175b623d18c76ddc433e87c3aea77edade1"} Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.533033 4711 scope.go:117] "RemoveContainer" containerID="65719ca8b2cd671cf2d07b24c7c6b6545e86c7b7ba2e9697aa489beb7f8b36d2" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.558620 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c7694fb9b-29wkn"] Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.569999 4711 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c7694fb9b-29wkn"] Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.578607 4711 scope.go:117] "RemoveContainer" containerID="944f90ec01056ec389a89e9f186c94b0482a5f9fb83134e0a0b872e7a4f9f32a" Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.585728 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pmlwz"] Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.599081 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pmlwz"] Dec 02 10:33:42 crc kubenswrapper[4711]: I1202 10:33:42.610209 4711 scope.go:117] "RemoveContainer" containerID="d2496a79a6f2069b47d6267ee3428cda5565ecc545beb746d4bf0d8893e4557d" Dec 02 10:33:43 crc kubenswrapper[4711]: I1202 10:33:43.089857 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8008b774-a8f4-426e-b0f8-7073e53c9fae" path="/var/lib/kubelet/pods/8008b774-a8f4-426e-b0f8-7073e53c9fae/volumes" Dec 02 10:33:43 crc kubenswrapper[4711]: I1202 10:33:43.090442 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" path="/var/lib/kubelet/pods/87082f2a-7ab6-44ad-95e0-226a0b7b416d/volumes" Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.526686 4711 generic.go:334] "Generic (PLEG): container finished" podID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerID="50ad3fcc3190aeafd15a08e8175caac18d43d29acc090664a74094b03d61e267" exitCode=0 Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.526762 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec53b7f7-0052-423b-b886-82861ff0d7fe","Type":"ContainerDied","Data":"50ad3fcc3190aeafd15a08e8175caac18d43d29acc090664a74094b03d61e267"} Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.527091 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"ec53b7f7-0052-423b-b886-82861ff0d7fe","Type":"ContainerDied","Data":"db54f6354fa0a33c715ac50a8afec2151fdad23571f0479425c978769f51e853"} Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.527112 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db54f6354fa0a33c715ac50a8afec2151fdad23571f0479425c978769f51e853" Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.560661 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.685372 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb54k\" (UniqueName: \"kubernetes.io/projected/ec53b7f7-0052-423b-b886-82861ff0d7fe-kube-api-access-vb54k\") pod \"ec53b7f7-0052-423b-b886-82861ff0d7fe\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.685484 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data-custom\") pod \"ec53b7f7-0052-423b-b886-82861ff0d7fe\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.685508 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-combined-ca-bundle\") pod \"ec53b7f7-0052-423b-b886-82861ff0d7fe\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.685535 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data\") pod \"ec53b7f7-0052-423b-b886-82861ff0d7fe\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") " Dec 02 10:33:45 crc 
kubenswrapper[4711]: I1202 10:33:45.685563 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec53b7f7-0052-423b-b886-82861ff0d7fe-etc-machine-id\") pod \"ec53b7f7-0052-423b-b886-82861ff0d7fe\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") "
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.685610 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-scripts\") pod \"ec53b7f7-0052-423b-b886-82861ff0d7fe\" (UID: \"ec53b7f7-0052-423b-b886-82861ff0d7fe\") "
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.691186 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec53b7f7-0052-423b-b886-82861ff0d7fe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ec53b7f7-0052-423b-b886-82861ff0d7fe" (UID: "ec53b7f7-0052-423b-b886-82861ff0d7fe"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.691442 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec53b7f7-0052-423b-b886-82861ff0d7fe-kube-api-access-vb54k" (OuterVolumeSpecName: "kube-api-access-vb54k") pod "ec53b7f7-0052-423b-b886-82861ff0d7fe" (UID: "ec53b7f7-0052-423b-b886-82861ff0d7fe"). InnerVolumeSpecName "kube-api-access-vb54k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.693235 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-scripts" (OuterVolumeSpecName: "scripts") pod "ec53b7f7-0052-423b-b886-82861ff0d7fe" (UID: "ec53b7f7-0052-423b-b886-82861ff0d7fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.700111 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ec53b7f7-0052-423b-b886-82861ff0d7fe" (UID: "ec53b7f7-0052-423b-b886-82861ff0d7fe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.738585 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec53b7f7-0052-423b-b886-82861ff0d7fe" (UID: "ec53b7f7-0052-423b-b886-82861ff0d7fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.786736 4711 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec53b7f7-0052-423b-b886-82861ff0d7fe-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.786766 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.786779 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb54k\" (UniqueName: \"kubernetes.io/projected/ec53b7f7-0052-423b-b886-82861ff0d7fe-kube-api-access-vb54k\") on node \"crc\" DevicePath \"\""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.786793 4711 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.786804 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.803658 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data" (OuterVolumeSpecName: "config-data") pod "ec53b7f7-0052-423b-b886-82861ff0d7fe" (UID: "ec53b7f7-0052-423b-b886-82861ff0d7fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:33:45 crc kubenswrapper[4711]: I1202 10:33:45.888277 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53b7f7-0052-423b-b886-82861ff0d7fe-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.536828 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.571230 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.583138 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603460 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 10:33:46 crc kubenswrapper[4711]: E1202 10:33:46.603783 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8008b774-a8f4-426e-b0f8-7073e53c9fae" containerName="init"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603801 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8008b774-a8f4-426e-b0f8-7073e53c9fae" containerName="init"
Dec 02 10:33:46 crc kubenswrapper[4711]: E1202 10:33:46.603813 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerName="probe"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603819 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerName="probe"
Dec 02 10:33:46 crc kubenswrapper[4711]: E1202 10:33:46.603833 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerName="neutron-httpd"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603838 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerName="neutron-httpd"
Dec 02 10:33:46 crc kubenswrapper[4711]: E1202 10:33:46.603855 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api-log"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603861 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api-log"
Dec 02 10:33:46 crc kubenswrapper[4711]: E1202 10:33:46.603878 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerName="cinder-scheduler"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603884 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerName="cinder-scheduler"
Dec 02 10:33:46 crc kubenswrapper[4711]: E1202 10:33:46.603894 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerName="neutron-api"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603899 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerName="neutron-api"
Dec 02 10:33:46 crc kubenswrapper[4711]: E1202 10:33:46.603906 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8008b774-a8f4-426e-b0f8-7073e53c9fae" containerName="dnsmasq-dns"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603912 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8008b774-a8f4-426e-b0f8-7073e53c9fae" containerName="dnsmasq-dns"
Dec 02 10:33:46 crc kubenswrapper[4711]: E1202 10:33:46.603920 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.603926 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.604135 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8008b774-a8f4-426e-b0f8-7073e53c9fae" containerName="dnsmasq-dns"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.604158 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerName="neutron-api"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.604167 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api-log"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.604178 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerName="cinder-scheduler"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.604189 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="db67a475-c964-4dd8-b9e0-2c5d198b7a3c" containerName="barbican-api"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.604200 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="87082f2a-7ab6-44ad-95e0-226a0b7b416d" containerName="neutron-httpd"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.604213 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" containerName="probe"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.605343 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.620582 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.651048 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.803491 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.803535 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a4b1357-91ed-4ef7-85f5-9b52085ce952-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.803598 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.803691 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.803736 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8m7g\" (UniqueName: \"kubernetes.io/projected/5a4b1357-91ed-4ef7-85f5-9b52085ce952-kube-api-access-j8m7g\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.803811 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.906476 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.906614 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.906643 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8m7g\" (UniqueName: \"kubernetes.io/projected/5a4b1357-91ed-4ef7-85f5-9b52085ce952-kube-api-access-j8m7g\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.906671 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.906780 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.906825 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a4b1357-91ed-4ef7-85f5-9b52085ce952-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.907356 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a4b1357-91ed-4ef7-85f5-9b52085ce952-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.910610 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.911235 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.911310 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.911680 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4b1357-91ed-4ef7-85f5-9b52085ce952-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:46 crc kubenswrapper[4711]: I1202 10:33:46.929478 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8m7g\" (UniqueName: \"kubernetes.io/projected/5a4b1357-91ed-4ef7-85f5-9b52085ce952-kube-api-access-j8m7g\") pod \"cinder-scheduler-0\" (UID: \"5a4b1357-91ed-4ef7-85f5-9b52085ce952\") " pod="openstack/cinder-scheduler-0"
Dec 02 10:33:47 crc kubenswrapper[4711]: I1202 10:33:47.089280 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec53b7f7-0052-423b-b886-82861ff0d7fe" path="/var/lib/kubelet/pods/ec53b7f7-0052-423b-b886-82861ff0d7fe/volumes"
Dec 02 10:33:47 crc kubenswrapper[4711]: I1202 10:33:47.228647 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 02 10:33:47 crc kubenswrapper[4711]: I1202 10:33:47.658244 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:47 crc kubenswrapper[4711]: I1202 10:33:47.668643 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6545f6547b-92nrg"
Dec 02 10:33:47 crc kubenswrapper[4711]: I1202 10:33:47.711189 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 10:33:48 crc kubenswrapper[4711]: I1202 10:33:48.022807 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6986b467dd-l4plx"
Dec 02 10:33:48 crc kubenswrapper[4711]: I1202 10:33:48.574586 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a4b1357-91ed-4ef7-85f5-9b52085ce952","Type":"ContainerStarted","Data":"d9f5da098d3c6838bdb89b7a3ac677969b74901a7f511c16e7e74a2a1fa43c17"}
Dec 02 10:33:48 crc kubenswrapper[4711]: I1202 10:33:48.574987 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a4b1357-91ed-4ef7-85f5-9b52085ce952","Type":"ContainerStarted","Data":"316b35be482f86e7236cd07e7fe7ba4f5a541156b0a618129dac23f52d1fae0e"}
Dec 02 10:33:48 crc kubenswrapper[4711]: I1202 10:33:48.689213 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 02 10:33:49 crc kubenswrapper[4711]: I1202 10:33:49.583051 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a4b1357-91ed-4ef7-85f5-9b52085ce952","Type":"ContainerStarted","Data":"2c965b30c50bf5d06477034a92f6f7507ccb8ebc474575c6f018a9d23dee9b81"}
Dec 02 10:33:49 crc kubenswrapper[4711]: I1202 10:33:49.756722 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76998c6f5b-xhr78" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.220106 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.220075392 podStartE2EDuration="6.220075392s" podCreationTimestamp="2025-12-02 10:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:49.609782922 +0000 UTC m=+1219.319149359" watchObservedRunningTime="2025-12-02 10:33:52.220075392 +0000 UTC m=+1221.929441879"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.223163 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.225252 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.231350 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.236873 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-q97gg"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.240193 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.240444 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.244021 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.258371 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mqzf\" (UniqueName: \"kubernetes.io/projected/6da5f746-13e6-4933-8b49-ad17165cfcf0-kube-api-access-2mqzf\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.258593 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6da5f746-13e6-4933-8b49-ad17165cfcf0-openstack-config\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.258639 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da5f746-13e6-4933-8b49-ad17165cfcf0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.258668 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6da5f746-13e6-4933-8b49-ad17165cfcf0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.360595 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6da5f746-13e6-4933-8b49-ad17165cfcf0-openstack-config\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.360652 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da5f746-13e6-4933-8b49-ad17165cfcf0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.360674 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6da5f746-13e6-4933-8b49-ad17165cfcf0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.360765 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mqzf\" (UniqueName: \"kubernetes.io/projected/6da5f746-13e6-4933-8b49-ad17165cfcf0-kube-api-access-2mqzf\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.361716 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6da5f746-13e6-4933-8b49-ad17165cfcf0-openstack-config\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.373685 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6da5f746-13e6-4933-8b49-ad17165cfcf0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.373702 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da5f746-13e6-4933-8b49-ad17165cfcf0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.378474 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mqzf\" (UniqueName: \"kubernetes.io/projected/6da5f746-13e6-4933-8b49-ad17165cfcf0-kube-api-access-2mqzf\") pod \"openstackclient\" (UID: \"6da5f746-13e6-4933-8b49-ad17165cfcf0\") " pod="openstack/openstackclient"
Dec 02 10:33:52 crc kubenswrapper[4711]: I1202 10:33:52.545829 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.075921 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.306334 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.306765 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="sg-core" containerID="cri-o://a0291ad592b5bc9d091c1ec1c0d08c1c41448703242d09064e003f77bf954c3a" gracePeriod=30
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.306817 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="proxy-httpd" containerID="cri-o://1008b4a91161580b41c6d778c0398668db98dc3893145ba493d576ca94acc79a" gracePeriod=30
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.307005 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="ceilometer-notification-agent" containerID="cri-o://b64cbc1d219a1320cf549b465a4bc6f8442511eb0eec73d9813988e5b1613852" gracePeriod=30
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.307305 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="ceilometer-central-agent" containerID="cri-o://1248f05c73ce0243ea7bed4336b7491f7cea3adf2fd9fdaf8d829dd99ddc2f4f" gracePeriod=30
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.408710 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": read tcp 10.217.0.2:36686->10.217.0.166:3000: read: connection reset by peer"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.513560 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8455cffcc7-gvzs8"]
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.515107 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.517137 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.517139 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.519127 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.536496 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8455cffcc7-gvzs8"]
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.596464 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-public-tls-certs\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.596592 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658v5\" (UniqueName: \"kubernetes.io/projected/4bb0ebbe-23dd-4970-bc78-799616ef2e21-kube-api-access-658v5\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.596626 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bb0ebbe-23dd-4970-bc78-799616ef2e21-log-httpd\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.596692 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4bb0ebbe-23dd-4970-bc78-799616ef2e21-etc-swift\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.596727 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bb0ebbe-23dd-4970-bc78-799616ef2e21-run-httpd\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.596806 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-internal-tls-certs\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.596851 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-config-data\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.596904 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-combined-ca-bundle\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.620833 4711 generic.go:334] "Generic (PLEG): container finished" podID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerID="1008b4a91161580b41c6d778c0398668db98dc3893145ba493d576ca94acc79a" exitCode=0
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.620866 4711 generic.go:334] "Generic (PLEG): container finished" podID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerID="a0291ad592b5bc9d091c1ec1c0d08c1c41448703242d09064e003f77bf954c3a" exitCode=2
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.620910 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerDied","Data":"1008b4a91161580b41c6d778c0398668db98dc3893145ba493d576ca94acc79a"}
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.620937 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerDied","Data":"a0291ad592b5bc9d091c1ec1c0d08c1c41448703242d09064e003f77bf954c3a"}
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.623279 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6da5f746-13e6-4933-8b49-ad17165cfcf0","Type":"ContainerStarted","Data":"14ba9c5264bf40d1e7743da10e57e2f5b4259332c4242dca0538f4be738a28c1"}
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.698854 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-internal-tls-certs\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.698931 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-config-data\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.699007 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-combined-ca-bundle\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.699055 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-public-tls-certs\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.699132 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-658v5\" (UniqueName: \"kubernetes.io/projected/4bb0ebbe-23dd-4970-bc78-799616ef2e21-kube-api-access-658v5\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.699174 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bb0ebbe-23dd-4970-bc78-799616ef2e21-log-httpd\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.699229 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4bb0ebbe-23dd-4970-bc78-799616ef2e21-etc-swift\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.699275 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bb0ebbe-23dd-4970-bc78-799616ef2e21-run-httpd\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.699849 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bb0ebbe-23dd-4970-bc78-799616ef2e21-run-httpd\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.701330 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bb0ebbe-23dd-4970-bc78-799616ef2e21-log-httpd\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8"
Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.706182 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-internal-tls-certs\") pod \"swift-proxy-8455cffcc7-gvzs8\"
(UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.707018 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-public-tls-certs\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.707400 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4bb0ebbe-23dd-4970-bc78-799616ef2e21-etc-swift\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.707650 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-combined-ca-bundle\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.708306 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb0ebbe-23dd-4970-bc78-799616ef2e21-config-data\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.727743 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-658v5\" (UniqueName: \"kubernetes.io/projected/4bb0ebbe-23dd-4970-bc78-799616ef2e21-kube-api-access-658v5\") pod \"swift-proxy-8455cffcc7-gvzs8\" (UID: \"4bb0ebbe-23dd-4970-bc78-799616ef2e21\") " 
pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:53 crc kubenswrapper[4711]: I1202 10:33:53.883562 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:54 crc kubenswrapper[4711]: I1202 10:33:54.435412 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8455cffcc7-gvzs8"] Dec 02 10:33:54 crc kubenswrapper[4711]: W1202 10:33:54.449750 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bb0ebbe_23dd_4970_bc78_799616ef2e21.slice/crio-4539f2d8c28a95d584e7c01ffff79ce181831332fb841de60deddd3cc5ba02d6 WatchSource:0}: Error finding container 4539f2d8c28a95d584e7c01ffff79ce181831332fb841de60deddd3cc5ba02d6: Status 404 returned error can't find the container with id 4539f2d8c28a95d584e7c01ffff79ce181831332fb841de60deddd3cc5ba02d6 Dec 02 10:33:54 crc kubenswrapper[4711]: I1202 10:33:54.638176 4711 generic.go:334] "Generic (PLEG): container finished" podID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerID="1248f05c73ce0243ea7bed4336b7491f7cea3adf2fd9fdaf8d829dd99ddc2f4f" exitCode=0 Dec 02 10:33:54 crc kubenswrapper[4711]: I1202 10:33:54.638247 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerDied","Data":"1248f05c73ce0243ea7bed4336b7491f7cea3adf2fd9fdaf8d829dd99ddc2f4f"} Dec 02 10:33:54 crc kubenswrapper[4711]: I1202 10:33:54.639174 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8455cffcc7-gvzs8" event={"ID":"4bb0ebbe-23dd-4970-bc78-799616ef2e21","Type":"ContainerStarted","Data":"4539f2d8c28a95d584e7c01ffff79ce181831332fb841de60deddd3cc5ba02d6"} Dec 02 10:33:55 crc kubenswrapper[4711]: I1202 10:33:55.655671 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8455cffcc7-gvzs8" 
event={"ID":"4bb0ebbe-23dd-4970-bc78-799616ef2e21","Type":"ContainerStarted","Data":"92ab22b7885c0e8192934650a7149047e3ad5f8c7efac43725a996c8c50ad1f2"} Dec 02 10:33:55 crc kubenswrapper[4711]: I1202 10:33:55.655940 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8455cffcc7-gvzs8" event={"ID":"4bb0ebbe-23dd-4970-bc78-799616ef2e21","Type":"ContainerStarted","Data":"52b3b995a35fe5c9e130b45d08a9f0edbeb84279d7ead0663c224feaa2d642f3"} Dec 02 10:33:55 crc kubenswrapper[4711]: I1202 10:33:55.656906 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:55 crc kubenswrapper[4711]: I1202 10:33:55.656932 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:33:55 crc kubenswrapper[4711]: I1202 10:33:55.686145 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8455cffcc7-gvzs8" podStartSLOduration=2.686120464 podStartE2EDuration="2.686120464s" podCreationTimestamp="2025-12-02 10:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:33:55.674023724 +0000 UTC m=+1225.383390171" watchObservedRunningTime="2025-12-02 10:33:55.686120464 +0000 UTC m=+1225.395486911" Dec 02 10:33:57 crc kubenswrapper[4711]: I1202 10:33:57.431275 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 02 10:33:57 crc kubenswrapper[4711]: I1202 10:33:57.676256 4711 generic.go:334] "Generic (PLEG): container finished" podID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerID="b64cbc1d219a1320cf549b465a4bc6f8442511eb0eec73d9813988e5b1613852" exitCode=0 Dec 02 10:33:57 crc kubenswrapper[4711]: I1202 10:33:57.676324 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerDied","Data":"b64cbc1d219a1320cf549b465a4bc6f8442511eb0eec73d9813988e5b1613852"} Dec 02 10:33:59 crc kubenswrapper[4711]: I1202 10:33:59.756538 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76998c6f5b-xhr78" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 02 10:33:59 crc kubenswrapper[4711]: I1202 10:33:59.756733 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.832601 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.902824 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-run-httpd\") pod \"1413d8be-89a1-43be-a1b6-b8072da4af1b\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.902883 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-sg-core-conf-yaml\") pod \"1413d8be-89a1-43be-a1b6-b8072da4af1b\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.902938 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlcjz\" (UniqueName: \"kubernetes.io/projected/1413d8be-89a1-43be-a1b6-b8072da4af1b-kube-api-access-hlcjz\") pod \"1413d8be-89a1-43be-a1b6-b8072da4af1b\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " Dec 02 10:34:02 crc kubenswrapper[4711]: 
I1202 10:34:02.903017 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-combined-ca-bundle\") pod \"1413d8be-89a1-43be-a1b6-b8072da4af1b\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.903131 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-scripts\") pod \"1413d8be-89a1-43be-a1b6-b8072da4af1b\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.903164 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-log-httpd\") pod \"1413d8be-89a1-43be-a1b6-b8072da4af1b\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.903204 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-config-data\") pod \"1413d8be-89a1-43be-a1b6-b8072da4af1b\" (UID: \"1413d8be-89a1-43be-a1b6-b8072da4af1b\") " Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.909364 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1413d8be-89a1-43be-a1b6-b8072da4af1b" (UID: "1413d8be-89a1-43be-a1b6-b8072da4af1b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.909381 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1413d8be-89a1-43be-a1b6-b8072da4af1b" (UID: "1413d8be-89a1-43be-a1b6-b8072da4af1b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.919004 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1413d8be-89a1-43be-a1b6-b8072da4af1b-kube-api-access-hlcjz" (OuterVolumeSpecName: "kube-api-access-hlcjz") pod "1413d8be-89a1-43be-a1b6-b8072da4af1b" (UID: "1413d8be-89a1-43be-a1b6-b8072da4af1b"). InnerVolumeSpecName "kube-api-access-hlcjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.921196 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-scripts" (OuterVolumeSpecName: "scripts") pod "1413d8be-89a1-43be-a1b6-b8072da4af1b" (UID: "1413d8be-89a1-43be-a1b6-b8072da4af1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:02 crc kubenswrapper[4711]: I1202 10:34:02.950567 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1413d8be-89a1-43be-a1b6-b8072da4af1b" (UID: "1413d8be-89a1-43be-a1b6-b8072da4af1b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.005048 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1413d8be-89a1-43be-a1b6-b8072da4af1b" (UID: "1413d8be-89a1-43be-a1b6-b8072da4af1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.005402 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.005430 4711 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.005444 4711 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1413d8be-89a1-43be-a1b6-b8072da4af1b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.005454 4711 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.005468 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlcjz\" (UniqueName: \"kubernetes.io/projected/1413d8be-89a1-43be-a1b6-b8072da4af1b-kube-api-access-hlcjz\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.042315 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-config-data" (OuterVolumeSpecName: "config-data") pod "1413d8be-89a1-43be-a1b6-b8072da4af1b" (UID: "1413d8be-89a1-43be-a1b6-b8072da4af1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.107220 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.107247 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1413d8be-89a1-43be-a1b6-b8072da4af1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.418938 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ctfhf"] Dec 02 10:34:03 crc kubenswrapper[4711]: E1202 10:34:03.419357 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="proxy-httpd" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.419374 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="proxy-httpd" Dec 02 10:34:03 crc kubenswrapper[4711]: E1202 10:34:03.419396 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="ceilometer-notification-agent" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.419403 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="ceilometer-notification-agent" Dec 02 10:34:03 crc kubenswrapper[4711]: E1202 10:34:03.419414 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="sg-core" Dec 02 
10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.419420 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="sg-core" Dec 02 10:34:03 crc kubenswrapper[4711]: E1202 10:34:03.419433 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="ceilometer-central-agent" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.419439 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="ceilometer-central-agent" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.419625 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="ceilometer-central-agent" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.419635 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="sg-core" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.419646 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="ceilometer-notification-agent" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.419657 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" containerName="proxy-httpd" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.420302 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.429160 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ctfhf"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.514358 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078d4918-1bcb-4825-a5e4-4a4130593668-operator-scripts\") pod \"nova-api-db-create-ctfhf\" (UID: \"078d4918-1bcb-4825-a5e4-4a4130593668\") " pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.514505 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9f4m\" (UniqueName: \"kubernetes.io/projected/078d4918-1bcb-4825-a5e4-4a4130593668-kube-api-access-j9f4m\") pod \"nova-api-db-create-ctfhf\" (UID: \"078d4918-1bcb-4825-a5e4-4a4130593668\") " pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.523507 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wghts"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.524990 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.533163 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wghts"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.562112 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-845d-account-create-update-26jgv"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.564812 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.567225 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.573288 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-845d-account-create-update-26jgv"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.616322 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-operator-scripts\") pod \"nova-api-845d-account-create-update-26jgv\" (UID: \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\") " pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.616427 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078d4918-1bcb-4825-a5e4-4a4130593668-operator-scripts\") pod \"nova-api-db-create-ctfhf\" (UID: \"078d4918-1bcb-4825-a5e4-4a4130593668\") " pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.616453 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v94s\" (UniqueName: \"kubernetes.io/projected/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-kube-api-access-9v94s\") pod \"nova-api-845d-account-create-update-26jgv\" (UID: \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\") " pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.616481 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a610c00-47f7-4347-8890-f5392ae78555-operator-scripts\") pod 
\"nova-cell0-db-create-wghts\" (UID: \"8a610c00-47f7-4347-8890-f5392ae78555\") " pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.616751 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx72j\" (UniqueName: \"kubernetes.io/projected/8a610c00-47f7-4347-8890-f5392ae78555-kube-api-access-sx72j\") pod \"nova-cell0-db-create-wghts\" (UID: \"8a610c00-47f7-4347-8890-f5392ae78555\") " pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.616887 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9f4m\" (UniqueName: \"kubernetes.io/projected/078d4918-1bcb-4825-a5e4-4a4130593668-kube-api-access-j9f4m\") pod \"nova-api-db-create-ctfhf\" (UID: \"078d4918-1bcb-4825-a5e4-4a4130593668\") " pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.617927 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078d4918-1bcb-4825-a5e4-4a4130593668-operator-scripts\") pod \"nova-api-db-create-ctfhf\" (UID: \"078d4918-1bcb-4825-a5e4-4a4130593668\") " pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.636873 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9f4m\" (UniqueName: \"kubernetes.io/projected/078d4918-1bcb-4825-a5e4-4a4130593668-kube-api-access-j9f4m\") pod \"nova-api-db-create-ctfhf\" (UID: \"078d4918-1bcb-4825-a5e4-4a4130593668\") " pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.718690 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx72j\" (UniqueName: \"kubernetes.io/projected/8a610c00-47f7-4347-8890-f5392ae78555-kube-api-access-sx72j\") pod 
\"nova-cell0-db-create-wghts\" (UID: \"8a610c00-47f7-4347-8890-f5392ae78555\") " pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.718781 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-operator-scripts\") pod \"nova-api-845d-account-create-update-26jgv\" (UID: \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\") " pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.718868 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v94s\" (UniqueName: \"kubernetes.io/projected/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-kube-api-access-9v94s\") pod \"nova-api-845d-account-create-update-26jgv\" (UID: \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\") " pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.718900 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a610c00-47f7-4347-8890-f5392ae78555-operator-scripts\") pod \"nova-cell0-db-create-wghts\" (UID: \"8a610c00-47f7-4347-8890-f5392ae78555\") " pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.719594 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-operator-scripts\") pod \"nova-api-845d-account-create-update-26jgv\" (UID: \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\") " pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.719671 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a610c00-47f7-4347-8890-f5392ae78555-operator-scripts\") pod \"nova-cell0-db-create-wghts\" (UID: \"8a610c00-47f7-4347-8890-f5392ae78555\") " pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.726334 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q64zj"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.727869 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.735265 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.740438 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx72j\" (UniqueName: \"kubernetes.io/projected/8a610c00-47f7-4347-8890-f5392ae78555-kube-api-access-sx72j\") pod \"nova-cell0-db-create-wghts\" (UID: \"8a610c00-47f7-4347-8890-f5392ae78555\") " pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.742098 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1413d8be-89a1-43be-a1b6-b8072da4af1b","Type":"ContainerDied","Data":"1a0e91b3cb86c6f0ea05823c83b5ce7d09dc668287464384349b7897b95ac559"} Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.742162 4711 scope.go:117] "RemoveContainer" containerID="1008b4a91161580b41c6d778c0398668db98dc3893145ba493d576ca94acc79a" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.742175 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.742563 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q64zj"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.744092 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6da5f746-13e6-4933-8b49-ad17165cfcf0","Type":"ContainerStarted","Data":"c3e1b693179eb9f0f2bc4448ab3ab86e93053f754659bc67a943ea33c08a89ac"} Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.782521 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v94s\" (UniqueName: \"kubernetes.io/projected/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-kube-api-access-9v94s\") pod \"nova-api-845d-account-create-update-26jgv\" (UID: \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\") " pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.799117 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-725c-account-create-update-ncz7n"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.801402 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.809829 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.822899 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcc5q\" (UniqueName: \"kubernetes.io/projected/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-kube-api-access-wcc5q\") pod \"nova-cell1-db-create-q64zj\" (UID: \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\") " pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.822985 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-operator-scripts\") pod \"nova-cell1-db-create-q64zj\" (UID: \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\") " pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.831300 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-725c-account-create-update-ncz7n"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.831800 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.330834596 podStartE2EDuration="11.831791012s" podCreationTimestamp="2025-12-02 10:33:52 +0000 UTC" firstStartedPulling="2025-12-02 10:33:53.075658308 +0000 UTC m=+1222.785024755" lastFinishedPulling="2025-12-02 10:34:02.576614704 +0000 UTC m=+1232.285981171" observedRunningTime="2025-12-02 10:34:03.783154159 +0000 UTC m=+1233.492520606" watchObservedRunningTime="2025-12-02 10:34:03.831791012 +0000 UTC m=+1233.541157459" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.840258 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.885299 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.886100 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.909231 4711 scope.go:117] "RemoveContainer" containerID="a0291ad592b5bc9d091c1ec1c0d08c1c41448703242d09064e003f77bf954c3a" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.928584 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.928654 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.930572 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcc5q\" (UniqueName: \"kubernetes.io/projected/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-kube-api-access-wcc5q\") pod \"nova-cell1-db-create-q64zj\" (UID: \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\") " pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.930618 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-operator-scripts\") pod \"nova-cell1-db-create-q64zj\" (UID: \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\") " pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.930653 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zs7q\" (UniqueName: \"kubernetes.io/projected/5f697b83-8112-40d8-a328-bfc157ffcdde-kube-api-access-7zs7q\") 
pod \"nova-cell0-725c-account-create-update-ncz7n\" (UID: \"5f697b83-8112-40d8-a328-bfc157ffcdde\") " pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.930739 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f697b83-8112-40d8-a328-bfc157ffcdde-operator-scripts\") pod \"nova-cell0-725c-account-create-update-ncz7n\" (UID: \"5f697b83-8112-40d8-a328-bfc157ffcdde\") " pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.935863 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-operator-scripts\") pod \"nova-cell1-db-create-q64zj\" (UID: \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\") " pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.943223 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8455cffcc7-gvzs8" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.955758 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.958831 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.965706 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.969161 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.978097 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.984143 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcc5q\" (UniqueName: \"kubernetes.io/projected/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-kube-api-access-wcc5q\") pod \"nova-cell1-db-create-q64zj\" (UID: \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\") " pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.993125 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6e60-account-create-update-mfx7j"] Dec 02 10:34:03 crc kubenswrapper[4711]: I1202 10:34:03.994509 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:03.997173 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.003150 4711 scope.go:117] "RemoveContainer" containerID="b64cbc1d219a1320cf549b465a4bc6f8442511eb0eec73d9813988e5b1613852" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.022061 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6e60-account-create-update-mfx7j"] Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032447 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032483 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-scripts\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032507 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zs7q\" (UniqueName: \"kubernetes.io/projected/5f697b83-8112-40d8-a328-bfc157ffcdde-kube-api-access-7zs7q\") pod \"nova-cell0-725c-account-create-update-ncz7n\" (UID: \"5f697b83-8112-40d8-a328-bfc157ffcdde\") " pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032534 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-run-httpd\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032560 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7p5c\" (UniqueName: \"kubernetes.io/projected/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-kube-api-access-v7p5c\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032599 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032656 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-config-data\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032686 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f697b83-8112-40d8-a328-bfc157ffcdde-operator-scripts\") pod \"nova-cell0-725c-account-create-update-ncz7n\" (UID: \"5f697b83-8112-40d8-a328-bfc157ffcdde\") " pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.032754 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-log-httpd\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.038852 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f697b83-8112-40d8-a328-bfc157ffcdde-operator-scripts\") pod \"nova-cell0-725c-account-create-update-ncz7n\" (UID: \"5f697b83-8112-40d8-a328-bfc157ffcdde\") " pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.051760 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zs7q\" (UniqueName: \"kubernetes.io/projected/5f697b83-8112-40d8-a328-bfc157ffcdde-kube-api-access-7zs7q\") pod \"nova-cell0-725c-account-create-update-ncz7n\" (UID: \"5f697b83-8112-40d8-a328-bfc157ffcdde\") " pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.072609 4711 scope.go:117] "RemoveContainer" containerID="1248f05c73ce0243ea7bed4336b7491f7cea3adf2fd9fdaf8d829dd99ddc2f4f" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.135553 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f23d852-29f7-4a94-8fb2-05115f135ac3-operator-scripts\") pod \"nova-cell1-6e60-account-create-update-mfx7j\" (UID: \"2f23d852-29f7-4a94-8fb2-05115f135ac3\") " pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.135903 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-log-httpd\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 
crc kubenswrapper[4711]: I1202 10:34:04.136027 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfc6f\" (UniqueName: \"kubernetes.io/projected/2f23d852-29f7-4a94-8fb2-05115f135ac3-kube-api-access-nfc6f\") pod \"nova-cell1-6e60-account-create-update-mfx7j\" (UID: \"2f23d852-29f7-4a94-8fb2-05115f135ac3\") " pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.136088 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.136111 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-scripts\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.136162 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-run-httpd\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.136207 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7p5c\" (UniqueName: \"kubernetes.io/projected/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-kube-api-access-v7p5c\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.136258 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.136468 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-config-data\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.138355 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-run-httpd\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.138687 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-log-httpd\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.140561 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.142403 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-config-data\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.143624 
4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-scripts\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.145381 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.152887 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.155326 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7p5c\" (UniqueName: \"kubernetes.io/projected/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-kube-api-access-v7p5c\") pod \"ceilometer-0\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.186586 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.237688 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f23d852-29f7-4a94-8fb2-05115f135ac3-operator-scripts\") pod \"nova-cell1-6e60-account-create-update-mfx7j\" (UID: \"2f23d852-29f7-4a94-8fb2-05115f135ac3\") " pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.237774 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfc6f\" (UniqueName: \"kubernetes.io/projected/2f23d852-29f7-4a94-8fb2-05115f135ac3-kube-api-access-nfc6f\") pod \"nova-cell1-6e60-account-create-update-mfx7j\" (UID: \"2f23d852-29f7-4a94-8fb2-05115f135ac3\") " pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.239042 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f23d852-29f7-4a94-8fb2-05115f135ac3-operator-scripts\") pod \"nova-cell1-6e60-account-create-update-mfx7j\" (UID: \"2f23d852-29f7-4a94-8fb2-05115f135ac3\") " pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.255587 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfc6f\" (UniqueName: \"kubernetes.io/projected/2f23d852-29f7-4a94-8fb2-05115f135ac3-kube-api-access-nfc6f\") pod \"nova-cell1-6e60-account-create-update-mfx7j\" (UID: \"2f23d852-29f7-4a94-8fb2-05115f135ac3\") " pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.315036 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.325759 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.354418 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ctfhf"] Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.463463 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wghts"] Dec 02 10:34:04 crc kubenswrapper[4711]: W1202 10:34:04.492129 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a610c00_47f7_4347_8890_f5392ae78555.slice/crio-fd60526718c876188e971c7b5717e396046d319288757d0eeae0e106a0fb6859 WatchSource:0}: Error finding container fd60526718c876188e971c7b5717e396046d319288757d0eeae0e106a0fb6859: Status 404 returned error can't find the container with id fd60526718c876188e971c7b5717e396046d319288757d0eeae0e106a0fb6859 Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.623789 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-845d-account-create-update-26jgv"] Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.764912 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-725c-account-create-update-ncz7n"] Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.777149 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wghts" event={"ID":"8a610c00-47f7-4347-8890-f5392ae78555","Type":"ContainerStarted","Data":"a06aaad519ffb24a41da223839471d01c0b1e3ebb9ecf9c1070636b43a7ae778"} Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.777198 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wghts" 
event={"ID":"8a610c00-47f7-4347-8890-f5392ae78555","Type":"ContainerStarted","Data":"fd60526718c876188e971c7b5717e396046d319288757d0eeae0e106a0fb6859"} Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.784896 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q64zj"] Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.785202 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ctfhf" event={"ID":"078d4918-1bcb-4825-a5e4-4a4130593668","Type":"ContainerStarted","Data":"b0d8f709a1180a24670476673d026c9be7797a5b21c52db35258281e221088ad"} Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.785225 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ctfhf" event={"ID":"078d4918-1bcb-4825-a5e4-4a4130593668","Type":"ContainerStarted","Data":"3691dd785a50ab0c160456cf9afc58f08442a07b4eeb40bcb42e56af88c9795c"} Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.795785 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-wghts" podStartSLOduration=1.795766758 podStartE2EDuration="1.795766758s" podCreationTimestamp="2025-12-02 10:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:04.790190627 +0000 UTC m=+1234.499557074" watchObservedRunningTime="2025-12-02 10:34:04.795766758 +0000 UTC m=+1234.505133205" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.804458 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-845d-account-create-update-26jgv" event={"ID":"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf","Type":"ContainerStarted","Data":"4873a866b841a1a0754e757cce2aacc1605462039dd96377720641136eaba753"} Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.822710 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-db-create-ctfhf" podStartSLOduration=1.822693048 podStartE2EDuration="1.822693048s" podCreationTimestamp="2025-12-02 10:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:04.80872596 +0000 UTC m=+1234.518092407" watchObservedRunningTime="2025-12-02 10:34:04.822693048 +0000 UTC m=+1234.532059495" Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.912659 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:04 crc kubenswrapper[4711]: W1202 10:34:04.915921 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f23d852_29f7_4a94_8fb2_05115f135ac3.slice/crio-dec6f3f72a6833dfc75e08e3acfa2f22351705e31707c5498445a7bd57c91fdc WatchSource:0}: Error finding container dec6f3f72a6833dfc75e08e3acfa2f22351705e31707c5498445a7bd57c91fdc: Status 404 returned error can't find the container with id dec6f3f72a6833dfc75e08e3acfa2f22351705e31707c5498445a7bd57c91fdc Dec 02 10:34:04 crc kubenswrapper[4711]: I1202 10:34:04.929449 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6e60-account-create-update-mfx7j"] Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.089922 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1413d8be-89a1-43be-a1b6-b8072da4af1b" path="/var/lib/kubelet/pods/1413d8be-89a1-43be-a1b6-b8072da4af1b/volumes" Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.132683 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.816302 4711 generic.go:334] "Generic (PLEG): container finished" podID="e05d72c3-0be9-40ae-aa5d-f24c5b02adbf" containerID="983849341a8cb019b6b6b94dc04813682d57af7c0826a5d4d952d0c0f506ae5b" exitCode=0 Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 
10:34:05.816375 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-845d-account-create-update-26jgv" event={"ID":"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf","Type":"ContainerDied","Data":"983849341a8cb019b6b6b94dc04813682d57af7c0826a5d4d952d0c0f506ae5b"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.819475 4711 generic.go:334] "Generic (PLEG): container finished" podID="2f23d852-29f7-4a94-8fb2-05115f135ac3" containerID="459858a58d3af25ea8ff08e1be5694b1437081792db913adae01b827e977e039" exitCode=0 Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.819515 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" event={"ID":"2f23d852-29f7-4a94-8fb2-05115f135ac3","Type":"ContainerDied","Data":"459858a58d3af25ea8ff08e1be5694b1437081792db913adae01b827e977e039"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.819562 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" event={"ID":"2f23d852-29f7-4a94-8fb2-05115f135ac3","Type":"ContainerStarted","Data":"dec6f3f72a6833dfc75e08e3acfa2f22351705e31707c5498445a7bd57c91fdc"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.822762 4711 generic.go:334] "Generic (PLEG): container finished" podID="8a610c00-47f7-4347-8890-f5392ae78555" containerID="a06aaad519ffb24a41da223839471d01c0b1e3ebb9ecf9c1070636b43a7ae778" exitCode=0 Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.822843 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wghts" event={"ID":"8a610c00-47f7-4347-8890-f5392ae78555","Type":"ContainerDied","Data":"a06aaad519ffb24a41da223839471d01c0b1e3ebb9ecf9c1070636b43a7ae778"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.824641 4711 generic.go:334] "Generic (PLEG): container finished" podID="5f697b83-8112-40d8-a328-bfc157ffcdde" 
containerID="4f2026f9819e0f7714463adaef34a8a489024f3cc7c1fc2a82fb712506e043ae" exitCode=0 Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.824698 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725c-account-create-update-ncz7n" event={"ID":"5f697b83-8112-40d8-a328-bfc157ffcdde","Type":"ContainerDied","Data":"4f2026f9819e0f7714463adaef34a8a489024f3cc7c1fc2a82fb712506e043ae"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.824719 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725c-account-create-update-ncz7n" event={"ID":"5f697b83-8112-40d8-a328-bfc157ffcdde","Type":"ContainerStarted","Data":"df98b24b6bd6ed157ed6116024ec0d97b9f742e5eba0316ed449d02b92b0a079"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.826633 4711 generic.go:334] "Generic (PLEG): container finished" podID="1f1c4aab-5848-4313-a0b0-b95d2a2f660d" containerID="da0fdf789d92ae9490e4afdf53b0d5eb2f73f32309daacf9b6504e17fe96a7df" exitCode=0 Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.826692 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q64zj" event={"ID":"1f1c4aab-5848-4313-a0b0-b95d2a2f660d","Type":"ContainerDied","Data":"da0fdf789d92ae9490e4afdf53b0d5eb2f73f32309daacf9b6504e17fe96a7df"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.826713 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q64zj" event={"ID":"1f1c4aab-5848-4313-a0b0-b95d2a2f660d","Type":"ContainerStarted","Data":"0dd9c0905619b4083a23ffc9de39c59106b4458d944ff39b20b1f3032f350b15"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.829609 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerStarted","Data":"a5b39f4039b1eac3405a5b520eb836774eca40606d6a548a99ba6e1eea243834"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.829655 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerStarted","Data":"f4dd0b2345892750126631b5f84bb961eda03f7cec9509d846e47e70049d3c82"} Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.834814 4711 generic.go:334] "Generic (PLEG): container finished" podID="078d4918-1bcb-4825-a5e4-4a4130593668" containerID="b0d8f709a1180a24670476673d026c9be7797a5b21c52db35258281e221088ad" exitCode=0 Dec 02 10:34:05 crc kubenswrapper[4711]: I1202 10:34:05.834868 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ctfhf" event={"ID":"078d4918-1bcb-4825-a5e4-4a4130593668","Type":"ContainerDied","Data":"b0d8f709a1180a24670476673d026c9be7797a5b21c52db35258281e221088ad"} Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.730200 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.790348 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-config-data\") pod \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.790432 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-scripts\") pod \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.790476 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-logs\") pod \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\" (UID: 
\"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.790705 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4llp2\" (UniqueName: \"kubernetes.io/projected/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-kube-api-access-4llp2\") pod \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.790767 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-secret-key\") pod \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.790881 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-tls-certs\") pod \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.791011 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-logs" (OuterVolumeSpecName: "logs") pod "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" (UID: "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.791604 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-combined-ca-bundle\") pod \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\" (UID: \"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd\") " Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.792034 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.796669 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-kube-api-access-4llp2" (OuterVolumeSpecName: "kube-api-access-4llp2") pod "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" (UID: "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd"). InnerVolumeSpecName "kube-api-access-4llp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.828799 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-scripts" (OuterVolumeSpecName: "scripts") pod "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" (UID: "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.845014 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" (UID: "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.848875 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-config-data" (OuterVolumeSpecName: "config-data") pod "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" (UID: "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.859369 4711 generic.go:334] "Generic (PLEG): container finished" podID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerID="fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571" exitCode=137 Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.859471 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76998c6f5b-xhr78" event={"ID":"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd","Type":"ContainerDied","Data":"fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571"} Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.859511 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76998c6f5b-xhr78" event={"ID":"b9435ea7-574e-4a04-ad38-aa7a1cd82ebd","Type":"ContainerDied","Data":"7beed1c54625f990a56f45443cb1ec2cf01afe76157982c7e8bf8211fd77e870"} Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.859540 4711 scope.go:117] "RemoveContainer" containerID="889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.859731 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76998c6f5b-xhr78" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.875068 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" (UID: "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.877008 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerStarted","Data":"cb3360c950750ad7855b1aea2bbf1b6ab2fe6f3315077351cce265ce60987d8d"} Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.894350 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.894387 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.894401 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4llp2\" (UniqueName: \"kubernetes.io/projected/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-kube-api-access-4llp2\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.894416 4711 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.894428 4711 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.898154 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" (UID: "b9435ea7-574e-4a04-ad38-aa7a1cd82ebd"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:06 crc kubenswrapper[4711]: I1202 10:34:06.996186 4711 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.128111 4711 scope.go:117] "RemoveContainer" containerID="fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.153194 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.198938 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76998c6f5b-xhr78"] Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.203194 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-operator-scripts\") pod \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\" (UID: \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.203278 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v94s\" (UniqueName: \"kubernetes.io/projected/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-kube-api-access-9v94s\") pod \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\" (UID: \"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.205499 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e05d72c3-0be9-40ae-aa5d-f24c5b02adbf" (UID: "e05d72c3-0be9-40ae-aa5d-f24c5b02adbf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.208682 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-kube-api-access-9v94s" (OuterVolumeSpecName: "kube-api-access-9v94s") pod "e05d72c3-0be9-40ae-aa5d-f24c5b02adbf" (UID: "e05d72c3-0be9-40ae-aa5d-f24c5b02adbf"). InnerVolumeSpecName "kube-api-access-9v94s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.210909 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76998c6f5b-xhr78"] Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.302017 4711 scope.go:117] "RemoveContainer" containerID="889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd" Dec 02 10:34:07 crc kubenswrapper[4711]: E1202 10:34:07.302532 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd\": container with ID starting with 889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd not found: ID does not exist" containerID="889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.302596 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd"} err="failed to get container status \"889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd\": rpc error: code = NotFound desc = could not find container \"889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd\": container with ID starting with 889e77a7f104c71f334c9e95cb57adc8351a273945210abf43c6ede9b6fca1dd not found: ID does not exist" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.302655 4711 scope.go:117] "RemoveContainer" containerID="fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571" Dec 02 10:34:07 crc kubenswrapper[4711]: E1202 10:34:07.302980 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571\": container with ID starting with fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571 not found: 
ID does not exist" containerID="fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.303013 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571"} err="failed to get container status \"fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571\": rpc error: code = NotFound desc = could not find container \"fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571\": container with ID starting with fb81b61e8ded5c1c084e08ac2368949070c72e10a230f94b98f51f3921859571 not found: ID does not exist" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.308738 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.308807 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v94s\" (UniqueName: \"kubernetes.io/projected/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf-kube-api-access-9v94s\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.649386 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.658591 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.664559 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.670195 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.676131 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.722015 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a610c00-47f7-4347-8890-f5392ae78555-operator-scripts\") pod \"8a610c00-47f7-4347-8890-f5392ae78555\" (UID: \"8a610c00-47f7-4347-8890-f5392ae78555\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.722145 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfc6f\" (UniqueName: \"kubernetes.io/projected/2f23d852-29f7-4a94-8fb2-05115f135ac3-kube-api-access-nfc6f\") pod \"2f23d852-29f7-4a94-8fb2-05115f135ac3\" (UID: \"2f23d852-29f7-4a94-8fb2-05115f135ac3\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.722198 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9f4m\" (UniqueName: \"kubernetes.io/projected/078d4918-1bcb-4825-a5e4-4a4130593668-kube-api-access-j9f4m\") pod \"078d4918-1bcb-4825-a5e4-4a4130593668\" (UID: \"078d4918-1bcb-4825-a5e4-4a4130593668\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.722227 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f23d852-29f7-4a94-8fb2-05115f135ac3-operator-scripts\") pod \"2f23d852-29f7-4a94-8fb2-05115f135ac3\" (UID: \"2f23d852-29f7-4a94-8fb2-05115f135ac3\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.722247 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078d4918-1bcb-4825-a5e4-4a4130593668-operator-scripts\") pod 
\"078d4918-1bcb-4825-a5e4-4a4130593668\" (UID: \"078d4918-1bcb-4825-a5e4-4a4130593668\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.722363 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx72j\" (UniqueName: \"kubernetes.io/projected/8a610c00-47f7-4347-8890-f5392ae78555-kube-api-access-sx72j\") pod \"8a610c00-47f7-4347-8890-f5392ae78555\" (UID: \"8a610c00-47f7-4347-8890-f5392ae78555\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.723822 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a610c00-47f7-4347-8890-f5392ae78555-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a610c00-47f7-4347-8890-f5392ae78555" (UID: "8a610c00-47f7-4347-8890-f5392ae78555"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.724246 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f23d852-29f7-4a94-8fb2-05115f135ac3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f23d852-29f7-4a94-8fb2-05115f135ac3" (UID: "2f23d852-29f7-4a94-8fb2-05115f135ac3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.724415 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078d4918-1bcb-4825-a5e4-4a4130593668-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "078d4918-1bcb-4825-a5e4-4a4130593668" (UID: "078d4918-1bcb-4825-a5e4-4a4130593668"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.731643 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a610c00-47f7-4347-8890-f5392ae78555-kube-api-access-sx72j" (OuterVolumeSpecName: "kube-api-access-sx72j") pod "8a610c00-47f7-4347-8890-f5392ae78555" (UID: "8a610c00-47f7-4347-8890-f5392ae78555"). InnerVolumeSpecName "kube-api-access-sx72j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.734063 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078d4918-1bcb-4825-a5e4-4a4130593668-kube-api-access-j9f4m" (OuterVolumeSpecName: "kube-api-access-j9f4m") pod "078d4918-1bcb-4825-a5e4-4a4130593668" (UID: "078d4918-1bcb-4825-a5e4-4a4130593668"). InnerVolumeSpecName "kube-api-access-j9f4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.743281 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f23d852-29f7-4a94-8fb2-05115f135ac3-kube-api-access-nfc6f" (OuterVolumeSpecName: "kube-api-access-nfc6f") pod "2f23d852-29f7-4a94-8fb2-05115f135ac3" (UID: "2f23d852-29f7-4a94-8fb2-05115f135ac3"). InnerVolumeSpecName "kube-api-access-nfc6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.824683 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcc5q\" (UniqueName: \"kubernetes.io/projected/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-kube-api-access-wcc5q\") pod \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\" (UID: \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.824805 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zs7q\" (UniqueName: \"kubernetes.io/projected/5f697b83-8112-40d8-a328-bfc157ffcdde-kube-api-access-7zs7q\") pod \"5f697b83-8112-40d8-a328-bfc157ffcdde\" (UID: \"5f697b83-8112-40d8-a328-bfc157ffcdde\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.824983 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-operator-scripts\") pod \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\" (UID: \"1f1c4aab-5848-4313-a0b0-b95d2a2f660d\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.825019 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f697b83-8112-40d8-a328-bfc157ffcdde-operator-scripts\") pod \"5f697b83-8112-40d8-a328-bfc157ffcdde\" (UID: \"5f697b83-8112-40d8-a328-bfc157ffcdde\") " Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.825680 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx72j\" (UniqueName: \"kubernetes.io/projected/8a610c00-47f7-4347-8890-f5392ae78555-kube-api-access-sx72j\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.825693 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a610c00-47f7-4347-8890-f5392ae78555-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.825706 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfc6f\" (UniqueName: \"kubernetes.io/projected/2f23d852-29f7-4a94-8fb2-05115f135ac3-kube-api-access-nfc6f\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.825715 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9f4m\" (UniqueName: \"kubernetes.io/projected/078d4918-1bcb-4825-a5e4-4a4130593668-kube-api-access-j9f4m\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.825723 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f23d852-29f7-4a94-8fb2-05115f135ac3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.825734 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078d4918-1bcb-4825-a5e4-4a4130593668-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.827014 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f697b83-8112-40d8-a328-bfc157ffcdde-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f697b83-8112-40d8-a328-bfc157ffcdde" (UID: "5f697b83-8112-40d8-a328-bfc157ffcdde"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.828551 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f1c4aab-5848-4313-a0b0-b95d2a2f660d" (UID: "1f1c4aab-5848-4313-a0b0-b95d2a2f660d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.880912 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f697b83-8112-40d8-a328-bfc157ffcdde-kube-api-access-7zs7q" (OuterVolumeSpecName: "kube-api-access-7zs7q") pod "5f697b83-8112-40d8-a328-bfc157ffcdde" (UID: "5f697b83-8112-40d8-a328-bfc157ffcdde"). InnerVolumeSpecName "kube-api-access-7zs7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.886349 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-kube-api-access-wcc5q" (OuterVolumeSpecName: "kube-api-access-wcc5q") pod "1f1c4aab-5848-4313-a0b0-b95d2a2f660d" (UID: "1f1c4aab-5848-4313-a0b0-b95d2a2f660d"). InnerVolumeSpecName "kube-api-access-wcc5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.894198 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-725c-account-create-update-ncz7n" event={"ID":"5f697b83-8112-40d8-a328-bfc157ffcdde","Type":"ContainerDied","Data":"df98b24b6bd6ed157ed6116024ec0d97b9f742e5eba0316ed449d02b92b0a079"} Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.894255 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df98b24b6bd6ed157ed6116024ec0d97b9f742e5eba0316ed449d02b92b0a079" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.894358 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-725c-account-create-update-ncz7n" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.907238 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q64zj" event={"ID":"1f1c4aab-5848-4313-a0b0-b95d2a2f660d","Type":"ContainerDied","Data":"0dd9c0905619b4083a23ffc9de39c59106b4458d944ff39b20b1f3032f350b15"} Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.907276 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd9c0905619b4083a23ffc9de39c59106b4458d944ff39b20b1f3032f350b15" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.907361 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q64zj" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.920422 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ctfhf" event={"ID":"078d4918-1bcb-4825-a5e4-4a4130593668","Type":"ContainerDied","Data":"3691dd785a50ab0c160456cf9afc58f08442a07b4eeb40bcb42e56af88c9795c"} Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.920462 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3691dd785a50ab0c160456cf9afc58f08442a07b4eeb40bcb42e56af88c9795c" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.920569 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ctfhf" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.928553 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zs7q\" (UniqueName: \"kubernetes.io/projected/5f697b83-8112-40d8-a328-bfc157ffcdde-kube-api-access-7zs7q\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.928586 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.928596 4711 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f697b83-8112-40d8-a328-bfc157ffcdde-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.928605 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcc5q\" (UniqueName: \"kubernetes.io/projected/1f1c4aab-5848-4313-a0b0-b95d2a2f660d-kube-api-access-wcc5q\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.935475 4711 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-845d-account-create-update-26jgv" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.935610 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-845d-account-create-update-26jgv" event={"ID":"e05d72c3-0be9-40ae-aa5d-f24c5b02adbf","Type":"ContainerDied","Data":"4873a866b841a1a0754e757cce2aacc1605462039dd96377720641136eaba753"} Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.935640 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4873a866b841a1a0754e757cce2aacc1605462039dd96377720641136eaba753" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.937688 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" event={"ID":"2f23d852-29f7-4a94-8fb2-05115f135ac3","Type":"ContainerDied","Data":"dec6f3f72a6833dfc75e08e3acfa2f22351705e31707c5498445a7bd57c91fdc"} Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.937720 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec6f3f72a6833dfc75e08e3acfa2f22351705e31707c5498445a7bd57c91fdc" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.937755 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6e60-account-create-update-mfx7j" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.945791 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wghts" event={"ID":"8a610c00-47f7-4347-8890-f5392ae78555","Type":"ContainerDied","Data":"fd60526718c876188e971c7b5717e396046d319288757d0eeae0e106a0fb6859"} Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.945831 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd60526718c876188e971c7b5717e396046d319288757d0eeae0e106a0fb6859" Dec 02 10:34:07 crc kubenswrapper[4711]: I1202 10:34:07.945902 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wghts" Dec 02 10:34:08 crc kubenswrapper[4711]: I1202 10:34:08.957338 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerStarted","Data":"c192f55eac5db6645e53c915ab60c102833c57995e6bf5f3f4027f816a4cf1f7"} Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.019586 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rsnn9"] Dec 02 10:34:09 crc kubenswrapper[4711]: E1202 10:34:09.020057 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f23d852-29f7-4a94-8fb2-05115f135ac3" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020079 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f23d852-29f7-4a94-8fb2-05115f135ac3" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: E1202 10:34:09.020098 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020107 4711 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon" Dec 02 10:34:09 crc kubenswrapper[4711]: E1202 10:34:09.020115 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078d4918-1bcb-4825-a5e4-4a4130593668" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020124 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="078d4918-1bcb-4825-a5e4-4a4130593668" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: E1202 10:34:09.020141 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f697b83-8112-40d8-a328-bfc157ffcdde" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020149 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f697b83-8112-40d8-a328-bfc157ffcdde" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: E1202 10:34:09.020164 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05d72c3-0be9-40ae-aa5d-f24c5b02adbf" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020174 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05d72c3-0be9-40ae-aa5d-f24c5b02adbf" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: E1202 10:34:09.020191 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon-log" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020200 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon-log" Dec 02 10:34:09 crc kubenswrapper[4711]: E1202 10:34:09.020212 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a610c00-47f7-4347-8890-f5392ae78555" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 
10:34:09.020219 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a610c00-47f7-4347-8890-f5392ae78555" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: E1202 10:34:09.020259 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1c4aab-5848-4313-a0b0-b95d2a2f660d" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020268 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1c4aab-5848-4313-a0b0-b95d2a2f660d" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020448 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a610c00-47f7-4347-8890-f5392ae78555" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020465 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" containerName="horizon" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020477 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="e05d72c3-0be9-40ae-aa5d-f24c5b02adbf" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020501 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f697b83-8112-40d8-a328-bfc157ffcdde" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020517 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1c4aab-5848-4313-a0b0-b95d2a2f660d" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020530 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="078d4918-1bcb-4825-a5e4-4a4130593668" containerName="mariadb-database-create" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020542 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" 
containerName="horizon-log" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.020556 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f23d852-29f7-4a94-8fb2-05115f135ac3" containerName="mariadb-account-create-update" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.021343 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.025655 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.025655 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vlcc5" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.028401 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.038170 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rsnn9"] Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.088496 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9435ea7-574e-4a04-ad38-aa7a1cd82ebd" path="/var/lib/kubelet/pods/b9435ea7-574e-4a04-ad38-aa7a1cd82ebd/volumes" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.148742 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpgf\" (UniqueName: \"kubernetes.io/projected/6c592ed4-2527-4546-8500-5dfc26ee5dca-kube-api-access-hbpgf\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.149210 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.149425 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-scripts\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.149689 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-config-data\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.251242 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.251444 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-scripts\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.251940 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-config-data\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.252031 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpgf\" (UniqueName: \"kubernetes.io/projected/6c592ed4-2527-4546-8500-5dfc26ee5dca-kube-api-access-hbpgf\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.258649 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-scripts\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.260355 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-config-data\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.263548 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.269504 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpgf\" (UniqueName: 
\"kubernetes.io/projected/6c592ed4-2527-4546-8500-5dfc26ee5dca-kube-api-access-hbpgf\") pod \"nova-cell0-conductor-db-sync-rsnn9\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.340071 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.797333 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rsnn9"] Dec 02 10:34:09 crc kubenswrapper[4711]: W1202 10:34:09.807162 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c592ed4_2527_4546_8500_5dfc26ee5dca.slice/crio-9d24c8d9e923b3ba125d94bec9b833013a10df834b3c83791d1e7cb0531d83f3 WatchSource:0}: Error finding container 9d24c8d9e923b3ba125d94bec9b833013a10df834b3c83791d1e7cb0531d83f3: Status 404 returned error can't find the container with id 9d24c8d9e923b3ba125d94bec9b833013a10df834b3c83791d1e7cb0531d83f3 Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.967039 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" event={"ID":"6c592ed4-2527-4546-8500-5dfc26ee5dca","Type":"ContainerStarted","Data":"9d24c8d9e923b3ba125d94bec9b833013a10df834b3c83791d1e7cb0531d83f3"} Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.969920 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerStarted","Data":"1df66de200a691cc2c8b9c8735175e4db13c3038339de1e6ef7b00dd5de6b680"} Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.970107 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" 
containerName="ceilometer-central-agent" containerID="cri-o://a5b39f4039b1eac3405a5b520eb836774eca40606d6a548a99ba6e1eea243834" gracePeriod=30 Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.970163 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.970160 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="proxy-httpd" containerID="cri-o://1df66de200a691cc2c8b9c8735175e4db13c3038339de1e6ef7b00dd5de6b680" gracePeriod=30 Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.970213 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="ceilometer-notification-agent" containerID="cri-o://cb3360c950750ad7855b1aea2bbf1b6ab2fe6f3315077351cce265ce60987d8d" gracePeriod=30 Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.970394 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="sg-core" containerID="cri-o://c192f55eac5db6645e53c915ab60c102833c57995e6bf5f3f4027f816a4cf1f7" gracePeriod=30 Dec 02 10:34:09 crc kubenswrapper[4711]: I1202 10:34:09.999854 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.604138746 podStartE2EDuration="6.999835952s" podCreationTimestamp="2025-12-02 10:34:03 +0000 UTC" firstStartedPulling="2025-12-02 10:34:04.922004141 +0000 UTC m=+1234.631370588" lastFinishedPulling="2025-12-02 10:34:09.317701347 +0000 UTC m=+1239.027067794" observedRunningTime="2025-12-02 10:34:09.996202913 +0000 UTC m=+1239.705569370" watchObservedRunningTime="2025-12-02 10:34:09.999835952 +0000 UTC m=+1239.709202399" Dec 02 10:34:10 crc kubenswrapper[4711]: 
I1202 10:34:10.993720 4711 generic.go:334] "Generic (PLEG): container finished" podID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerID="1df66de200a691cc2c8b9c8735175e4db13c3038339de1e6ef7b00dd5de6b680" exitCode=0 Dec 02 10:34:10 crc kubenswrapper[4711]: I1202 10:34:10.993969 4711 generic.go:334] "Generic (PLEG): container finished" podID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerID="c192f55eac5db6645e53c915ab60c102833c57995e6bf5f3f4027f816a4cf1f7" exitCode=2 Dec 02 10:34:10 crc kubenswrapper[4711]: I1202 10:34:10.993983 4711 generic.go:334] "Generic (PLEG): container finished" podID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerID="cb3360c950750ad7855b1aea2bbf1b6ab2fe6f3315077351cce265ce60987d8d" exitCode=0 Dec 02 10:34:10 crc kubenswrapper[4711]: I1202 10:34:10.993789 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerDied","Data":"1df66de200a691cc2c8b9c8735175e4db13c3038339de1e6ef7b00dd5de6b680"} Dec 02 10:34:10 crc kubenswrapper[4711]: I1202 10:34:10.994059 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerDied","Data":"c192f55eac5db6645e53c915ab60c102833c57995e6bf5f3f4027f816a4cf1f7"} Dec 02 10:34:10 crc kubenswrapper[4711]: I1202 10:34:10.994078 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerDied","Data":"cb3360c950750ad7855b1aea2bbf1b6ab2fe6f3315077351cce265ce60987d8d"} Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.053403 4711 generic.go:334] "Generic (PLEG): container finished" podID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerID="a5b39f4039b1eac3405a5b520eb836774eca40606d6a548a99ba6e1eea243834" exitCode=0 Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.053466 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerDied","Data":"a5b39f4039b1eac3405a5b520eb836774eca40606d6a548a99ba6e1eea243834"} Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.661572 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.682882 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-run-httpd\") pod \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.682966 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-config-data\") pod \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.683012 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-log-httpd\") pod \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.683055 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-combined-ca-bundle\") pod \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.683140 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7p5c\" (UniqueName: 
\"kubernetes.io/projected/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-kube-api-access-v7p5c\") pod \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.683211 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-scripts\") pod \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.683236 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-sg-core-conf-yaml\") pod \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\" (UID: \"cbd96160-d98f-4ec2-ab12-dab3a57e65ae\") " Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.685695 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cbd96160-d98f-4ec2-ab12-dab3a57e65ae" (UID: "cbd96160-d98f-4ec2-ab12-dab3a57e65ae"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.685720 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cbd96160-d98f-4ec2-ab12-dab3a57e65ae" (UID: "cbd96160-d98f-4ec2-ab12-dab3a57e65ae"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.692812 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-kube-api-access-v7p5c" (OuterVolumeSpecName: "kube-api-access-v7p5c") pod "cbd96160-d98f-4ec2-ab12-dab3a57e65ae" (UID: "cbd96160-d98f-4ec2-ab12-dab3a57e65ae"). InnerVolumeSpecName "kube-api-access-v7p5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.713176 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-scripts" (OuterVolumeSpecName: "scripts") pod "cbd96160-d98f-4ec2-ab12-dab3a57e65ae" (UID: "cbd96160-d98f-4ec2-ab12-dab3a57e65ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.754833 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cbd96160-d98f-4ec2-ab12-dab3a57e65ae" (UID: "cbd96160-d98f-4ec2-ab12-dab3a57e65ae"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.786384 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7p5c\" (UniqueName: \"kubernetes.io/projected/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-kube-api-access-v7p5c\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.786417 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.786425 4711 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.786432 4711 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.786440 4711 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.819125 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbd96160-d98f-4ec2-ab12-dab3a57e65ae" (UID: "cbd96160-d98f-4ec2-ab12-dab3a57e65ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.863987 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-config-data" (OuterVolumeSpecName: "config-data") pod "cbd96160-d98f-4ec2-ab12-dab3a57e65ae" (UID: "cbd96160-d98f-4ec2-ab12-dab3a57e65ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.887816 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:16 crc kubenswrapper[4711]: I1202 10:34:16.888160 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd96160-d98f-4ec2-ab12-dab3a57e65ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.067142 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbd96160-d98f-4ec2-ab12-dab3a57e65ae","Type":"ContainerDied","Data":"f4dd0b2345892750126631b5f84bb961eda03f7cec9509d846e47e70049d3c82"} Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.067225 4711 scope.go:117] "RemoveContainer" containerID="1df66de200a691cc2c8b9c8735175e4db13c3038339de1e6ef7b00dd5de6b680" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.067582 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.072023 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" event={"ID":"6c592ed4-2527-4546-8500-5dfc26ee5dca","Type":"ContainerStarted","Data":"e7582976081adb5b24e6a466af25e03646af7b26ab294f9be45b6a7c1d61bc48"} Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.089547 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" podStartSLOduration=2.217096253 podStartE2EDuration="9.089514472s" podCreationTimestamp="2025-12-02 10:34:08 +0000 UTC" firstStartedPulling="2025-12-02 10:34:09.809563783 +0000 UTC m=+1239.518930230" lastFinishedPulling="2025-12-02 10:34:16.681982002 +0000 UTC m=+1246.391348449" observedRunningTime="2025-12-02 10:34:17.088288499 +0000 UTC m=+1246.797654946" watchObservedRunningTime="2025-12-02 10:34:17.089514472 +0000 UTC m=+1246.798880929" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.109489 4711 scope.go:117] "RemoveContainer" containerID="c192f55eac5db6645e53c915ab60c102833c57995e6bf5f3f4027f816a4cf1f7" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.125316 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.133388 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.135220 4711 scope.go:117] "RemoveContainer" containerID="cb3360c950750ad7855b1aea2bbf1b6ab2fe6f3315077351cce265ce60987d8d" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.165190 4711 scope.go:117] "RemoveContainer" containerID="a5b39f4039b1eac3405a5b520eb836774eca40606d6a548a99ba6e1eea243834" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.166506 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 
10:34:17 crc kubenswrapper[4711]: E1202 10:34:17.167034 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="ceilometer-notification-agent" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.167071 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="ceilometer-notification-agent" Dec 02 10:34:17 crc kubenswrapper[4711]: E1202 10:34:17.167090 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="proxy-httpd" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.167099 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="proxy-httpd" Dec 02 10:34:17 crc kubenswrapper[4711]: E1202 10:34:17.167473 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="sg-core" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.167499 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="sg-core" Dec 02 10:34:17 crc kubenswrapper[4711]: E1202 10:34:17.167514 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="ceilometer-central-agent" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.167523 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="ceilometer-central-agent" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.167747 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="proxy-httpd" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.167777 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="sg-core" 
Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.167802 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="ceilometer-notification-agent" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.167821 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" containerName="ceilometer-central-agent" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.171056 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.177964 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.178208 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.193673 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.194074 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.194103 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-scripts\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.194161 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-log-httpd\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.194196 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-run-httpd\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.194223 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26gn\" (UniqueName: \"kubernetes.io/projected/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-kube-api-access-n26gn\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.194290 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-config-data\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.194312 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.296583 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-scripts\") pod \"ceilometer-0\" (UID: 
\"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.296627 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.296696 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-log-httpd\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.296725 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-run-httpd\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.296763 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n26gn\" (UniqueName: \"kubernetes.io/projected/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-kube-api-access-n26gn\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.296809 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-config-data\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.296832 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.297880 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-run-httpd\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.298368 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-log-httpd\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.302285 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.302491 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.303733 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-config-data\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 
10:34:17.304342 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-scripts\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.322221 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n26gn\" (UniqueName: \"kubernetes.io/projected/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-kube-api-access-n26gn\") pod \"ceilometer-0\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " pod="openstack/ceilometer-0" Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.357431 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.357858 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerName="glance-log" containerID="cri-o://f0894e3708f5e05cad43db87fe46de58995ed3905fec3957c2ca884be2d0545c" gracePeriod=30 Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.357931 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerName="glance-httpd" containerID="cri-o://6c916be1505ac21d2991032d59d831754d806cb475012bfd7ab4b2a0ca87f928" gracePeriod=30 Dec 02 10:34:17 crc kubenswrapper[4711]: I1202 10:34:17.554389 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:18 crc kubenswrapper[4711]: I1202 10:34:18.029221 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:18 crc kubenswrapper[4711]: W1202 10:34:18.044125 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb51a5ba_b266_4df0_91df_2cb1b32de7a2.slice/crio-1f34ea87069397dd69a9082c13bbf63b98f9112ea662d2f1f3b94f8f0f914578 WatchSource:0}: Error finding container 1f34ea87069397dd69a9082c13bbf63b98f9112ea662d2f1f3b94f8f0f914578: Status 404 returned error can't find the container with id 1f34ea87069397dd69a9082c13bbf63b98f9112ea662d2f1f3b94f8f0f914578 Dec 02 10:34:18 crc kubenswrapper[4711]: I1202 10:34:18.089314 4711 generic.go:334] "Generic (PLEG): container finished" podID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerID="f0894e3708f5e05cad43db87fe46de58995ed3905fec3957c2ca884be2d0545c" exitCode=143 Dec 02 10:34:18 crc kubenswrapper[4711]: I1202 10:34:18.089388 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e10ea6af-6f3d-468b-be7c-80e79fb0d899","Type":"ContainerDied","Data":"f0894e3708f5e05cad43db87fe46de58995ed3905fec3957c2ca884be2d0545c"} Dec 02 10:34:18 crc kubenswrapper[4711]: I1202 10:34:18.090771 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerStarted","Data":"1f34ea87069397dd69a9082c13bbf63b98f9112ea662d2f1f3b94f8f0f914578"} Dec 02 10:34:19 crc kubenswrapper[4711]: I1202 10:34:19.026040 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:19 crc kubenswrapper[4711]: I1202 10:34:19.088804 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd96160-d98f-4ec2-ab12-dab3a57e65ae" 
path="/var/lib/kubelet/pods/cbd96160-d98f-4ec2-ab12-dab3a57e65ae/volumes" Dec 02 10:34:19 crc kubenswrapper[4711]: I1202 10:34:19.101110 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerStarted","Data":"1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411"} Dec 02 10:34:21 crc kubenswrapper[4711]: I1202 10:34:21.144766 4711 generic.go:334] "Generic (PLEG): container finished" podID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerID="6c916be1505ac21d2991032d59d831754d806cb475012bfd7ab4b2a0ca87f928" exitCode=0 Dec 02 10:34:21 crc kubenswrapper[4711]: I1202 10:34:21.144843 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e10ea6af-6f3d-468b-be7c-80e79fb0d899","Type":"ContainerDied","Data":"6c916be1505ac21d2991032d59d831754d806cb475012bfd7ab4b2a0ca87f928"} Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.155024 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.155340 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e10ea6af-6f3d-468b-be7c-80e79fb0d899","Type":"ContainerDied","Data":"60e642d9bcea417243f93c0d627808b200fdc0e86055d38d830ff53c73104078"} Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.155780 4711 scope.go:117] "RemoveContainer" containerID="6c916be1505ac21d2991032d59d831754d806cb475012bfd7ab4b2a0ca87f928" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.158737 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerStarted","Data":"4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca"} Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.198469 4711 scope.go:117] "RemoveContainer" containerID="f0894e3708f5e05cad43db87fe46de58995ed3905fec3957c2ca884be2d0545c" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.199070 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-httpd-run\") pod \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.199230 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-combined-ca-bundle\") pod \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.199259 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-logs\") pod 
\"e10ea6af-6f3d-468b-be7c-80e79fb0d899\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.199300 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-config-data\") pod \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.199319 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-internal-tls-certs\") pod \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.199374 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-scripts\") pod \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.199390 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2xq8\" (UniqueName: \"kubernetes.io/projected/e10ea6af-6f3d-468b-be7c-80e79fb0d899-kube-api-access-b2xq8\") pod \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.199444 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\" (UID: \"e10ea6af-6f3d-468b-be7c-80e79fb0d899\") " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.200313 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e10ea6af-6f3d-468b-be7c-80e79fb0d899" (UID: "e10ea6af-6f3d-468b-be7c-80e79fb0d899"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.201109 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-logs" (OuterVolumeSpecName: "logs") pod "e10ea6af-6f3d-468b-be7c-80e79fb0d899" (UID: "e10ea6af-6f3d-468b-be7c-80e79fb0d899"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.211315 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-scripts" (OuterVolumeSpecName: "scripts") pod "e10ea6af-6f3d-468b-be7c-80e79fb0d899" (UID: "e10ea6af-6f3d-468b-be7c-80e79fb0d899"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.217183 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "e10ea6af-6f3d-468b-be7c-80e79fb0d899" (UID: "e10ea6af-6f3d-468b-be7c-80e79fb0d899"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.219865 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e10ea6af-6f3d-468b-be7c-80e79fb0d899-kube-api-access-b2xq8" (OuterVolumeSpecName: "kube-api-access-b2xq8") pod "e10ea6af-6f3d-468b-be7c-80e79fb0d899" (UID: "e10ea6af-6f3d-468b-be7c-80e79fb0d899"). InnerVolumeSpecName "kube-api-access-b2xq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.272147 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e10ea6af-6f3d-468b-be7c-80e79fb0d899" (UID: "e10ea6af-6f3d-468b-be7c-80e79fb0d899"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.274777 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-config-data" (OuterVolumeSpecName: "config-data") pod "e10ea6af-6f3d-468b-be7c-80e79fb0d899" (UID: "e10ea6af-6f3d-468b-be7c-80e79fb0d899"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.288580 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e10ea6af-6f3d-468b-be7c-80e79fb0d899" (UID: "e10ea6af-6f3d-468b-be7c-80e79fb0d899"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.301151 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.301186 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2xq8\" (UniqueName: \"kubernetes.io/projected/e10ea6af-6f3d-468b-be7c-80e79fb0d899-kube-api-access-b2xq8\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.301258 4711 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.301271 4711 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.301282 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.301292 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.301302 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10ea6af-6f3d-468b-be7c-80e79fb0d899-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.301312 4711 reconciler_common.go:293] "Volume detached for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e10ea6af-6f3d-468b-be7c-80e79fb0d899-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.328109 4711 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 02 10:34:22 crc kubenswrapper[4711]: I1202 10:34:22.402577 4711 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.169357 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.190454 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.203826 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.235119 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:34:23 crc kubenswrapper[4711]: E1202 10:34:23.235566 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerName="glance-log" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.235578 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerName="glance-log" Dec 02 10:34:23 crc kubenswrapper[4711]: E1202 10:34:23.235591 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerName="glance-httpd" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.235598 4711 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerName="glance-httpd" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.235764 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerName="glance-log" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.235792 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" containerName="glance-httpd" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.236747 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.249447 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.256374 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.256720 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.322446 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.322495 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc 
kubenswrapper[4711]: I1202 10:34:23.322552 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9a0c110-a808-440d-ad76-4c1b193f3543-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.322620 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhww\" (UniqueName: \"kubernetes.io/projected/c9a0c110-a808-440d-ad76-4c1b193f3543-kube-api-access-8vhww\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.322651 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.322687 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.322704 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a0c110-a808-440d-ad76-4c1b193f3543-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " 
pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.322723 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.424781 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.424825 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a0c110-a808-440d-ad76-4c1b193f3543-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.424895 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.425371 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a0c110-a808-440d-ad76-4c1b193f3543-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 
10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.425579 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.425630 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.425914 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9a0c110-a808-440d-ad76-4c1b193f3543-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.426096 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhww\" (UniqueName: \"kubernetes.io/projected/c9a0c110-a808-440d-ad76-4c1b193f3543-kube-api-access-8vhww\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.426159 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.426092 4711 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.426216 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9a0c110-a808-440d-ad76-4c1b193f3543-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.431306 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.432272 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.433266 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.434208 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c9a0c110-a808-440d-ad76-4c1b193f3543-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.451167 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhww\" (UniqueName: \"kubernetes.io/projected/c9a0c110-a808-440d-ad76-4c1b193f3543-kube-api-access-8vhww\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.472788 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9a0c110-a808-440d-ad76-4c1b193f3543\") " pod="openstack/glance-default-internal-api-0" Dec 02 10:34:23 crc kubenswrapper[4711]: I1202 10:34:23.584540 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:24 crc kubenswrapper[4711]: I1202 10:34:24.166902 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 10:34:24 crc kubenswrapper[4711]: W1202 10:34:24.181973 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a0c110_a808_440d_ad76_4c1b193f3543.slice/crio-44dfdeb34049b327e6fac7d6c8c4bbc32f3f3128d4239ba60fa99c101e1705f5 WatchSource:0}: Error finding container 44dfdeb34049b327e6fac7d6c8c4bbc32f3f3128d4239ba60fa99c101e1705f5: Status 404 returned error can't find the container with id 44dfdeb34049b327e6fac7d6c8c4bbc32f3f3128d4239ba60fa99c101e1705f5 Dec 02 10:34:24 crc kubenswrapper[4711]: I1202 10:34:24.187795 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerStarted","Data":"c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb"} Dec 02 10:34:25 crc kubenswrapper[4711]: I1202 10:34:25.095548 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e10ea6af-6f3d-468b-be7c-80e79fb0d899" path="/var/lib/kubelet/pods/e10ea6af-6f3d-468b-be7c-80e79fb0d899/volumes" Dec 02 10:34:25 crc kubenswrapper[4711]: I1202 10:34:25.214100 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9a0c110-a808-440d-ad76-4c1b193f3543","Type":"ContainerStarted","Data":"24c6e2404660d52d04a9a58cf7a9d475c87e4b75150ba77f253a12ff69d3af54"} Dec 02 10:34:25 crc kubenswrapper[4711]: I1202 10:34:25.214152 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9a0c110-a808-440d-ad76-4c1b193f3543","Type":"ContainerStarted","Data":"44dfdeb34049b327e6fac7d6c8c4bbc32f3f3128d4239ba60fa99c101e1705f5"} Dec 02 10:34:26 crc kubenswrapper[4711]: 
I1202 10:34:26.228283 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerStarted","Data":"3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf"} Dec 02 10:34:26 crc kubenswrapper[4711]: I1202 10:34:26.228352 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="ceilometer-central-agent" containerID="cri-o://1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411" gracePeriod=30 Dec 02 10:34:26 crc kubenswrapper[4711]: I1202 10:34:26.228548 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="ceilometer-notification-agent" containerID="cri-o://4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca" gracePeriod=30 Dec 02 10:34:26 crc kubenswrapper[4711]: I1202 10:34:26.228501 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="sg-core" containerID="cri-o://c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb" gracePeriod=30 Dec 02 10:34:26 crc kubenswrapper[4711]: I1202 10:34:26.228608 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="proxy-httpd" containerID="cri-o://3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf" gracePeriod=30 Dec 02 10:34:26 crc kubenswrapper[4711]: I1202 10:34:26.229435 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:34:26 crc kubenswrapper[4711]: I1202 10:34:26.231766 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c9a0c110-a808-440d-ad76-4c1b193f3543","Type":"ContainerStarted","Data":"c006281435689d35640a4896aaddc72dd43bd3635e6a80ecb80105793e85b6b2"} Dec 02 10:34:26 crc kubenswrapper[4711]: I1202 10:34:26.290602 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.29058066 podStartE2EDuration="3.29058066s" podCreationTimestamp="2025-12-02 10:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:26.288323988 +0000 UTC m=+1255.997690445" watchObservedRunningTime="2025-12-02 10:34:26.29058066 +0000 UTC m=+1255.999947117" Dec 02 10:34:26 crc kubenswrapper[4711]: I1202 10:34:26.296567 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.21862225 podStartE2EDuration="9.296547911s" podCreationTimestamp="2025-12-02 10:34:17 +0000 UTC" firstStartedPulling="2025-12-02 10:34:18.046806348 +0000 UTC m=+1247.756172795" lastFinishedPulling="2025-12-02 10:34:25.124732009 +0000 UTC m=+1254.834098456" observedRunningTime="2025-12-02 10:34:26.261383768 +0000 UTC m=+1255.970750285" watchObservedRunningTime="2025-12-02 10:34:26.296547911 +0000 UTC m=+1256.005914368" Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.244025 4711 generic.go:334] "Generic (PLEG): container finished" podID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerID="3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf" exitCode=0 Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.244066 4711 generic.go:334] "Generic (PLEG): container finished" podID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerID="c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb" exitCode=2 Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.244079 4711 generic.go:334] "Generic (PLEG): container finished" 
podID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerID="4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca" exitCode=0 Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.244120 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerDied","Data":"3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf"} Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.244187 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerDied","Data":"c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb"} Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.244205 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerDied","Data":"4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca"} Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.258781 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.259069 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerName="glance-log" containerID="cri-o://948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb" gracePeriod=30 Dec 02 10:34:27 crc kubenswrapper[4711]: I1202 10:34:27.259216 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerName="glance-httpd" containerID="cri-o://d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076" gracePeriod=30 Dec 02 10:34:28 crc kubenswrapper[4711]: I1202 10:34:28.255518 4711 generic.go:334] 
"Generic (PLEG): container finished" podID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerID="948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb" exitCode=143 Dec 02 10:34:28 crc kubenswrapper[4711]: I1202 10:34:28.255623 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7bbf095c-359d-4e14-95e8-d75e57a7f7c2","Type":"ContainerDied","Data":"948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb"} Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.923773 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.969752 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-scripts\") pod \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.969809 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-combined-ca-bundle\") pod \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.969831 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-config-data\") pod \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.969888 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n26gn\" (UniqueName: \"kubernetes.io/projected/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-kube-api-access-n26gn\") pod 
\"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.969933 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-run-httpd\") pod \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.969993 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-log-httpd\") pod \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.970062 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-sg-core-conf-yaml\") pod \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\" (UID: \"eb51a5ba-b266-4df0-91df-2cb1b32de7a2\") " Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.971406 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb51a5ba-b266-4df0-91df-2cb1b32de7a2" (UID: "eb51a5ba-b266-4df0-91df-2cb1b32de7a2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.972038 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb51a5ba-b266-4df0-91df-2cb1b32de7a2" (UID: "eb51a5ba-b266-4df0-91df-2cb1b32de7a2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.977257 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-kube-api-access-n26gn" (OuterVolumeSpecName: "kube-api-access-n26gn") pod "eb51a5ba-b266-4df0-91df-2cb1b32de7a2" (UID: "eb51a5ba-b266-4df0-91df-2cb1b32de7a2"). InnerVolumeSpecName "kube-api-access-n26gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:29 crc kubenswrapper[4711]: I1202 10:34:29.977352 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-scripts" (OuterVolumeSpecName: "scripts") pod "eb51a5ba-b266-4df0-91df-2cb1b32de7a2" (UID: "eb51a5ba-b266-4df0-91df-2cb1b32de7a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.003639 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb51a5ba-b266-4df0-91df-2cb1b32de7a2" (UID: "eb51a5ba-b266-4df0-91df-2cb1b32de7a2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.053398 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb51a5ba-b266-4df0-91df-2cb1b32de7a2" (UID: "eb51a5ba-b266-4df0-91df-2cb1b32de7a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.071831 4711 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.071853 4711 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.071861 4711 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.071869 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.071876 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.071885 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n26gn\" (UniqueName: \"kubernetes.io/projected/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-kube-api-access-n26gn\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.078334 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-config-data" (OuterVolumeSpecName: "config-data") pod "eb51a5ba-b266-4df0-91df-2cb1b32de7a2" (UID: "eb51a5ba-b266-4df0-91df-2cb1b32de7a2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.173180 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb51a5ba-b266-4df0-91df-2cb1b32de7a2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.280031 4711 generic.go:334] "Generic (PLEG): container finished" podID="6c592ed4-2527-4546-8500-5dfc26ee5dca" containerID="e7582976081adb5b24e6a466af25e03646af7b26ab294f9be45b6a7c1d61bc48" exitCode=0 Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.280133 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" event={"ID":"6c592ed4-2527-4546-8500-5dfc26ee5dca","Type":"ContainerDied","Data":"e7582976081adb5b24e6a466af25e03646af7b26ab294f9be45b6a7c1d61bc48"} Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.285795 4711 generic.go:334] "Generic (PLEG): container finished" podID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerID="1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411" exitCode=0 Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.285867 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerDied","Data":"1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411"} Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.286004 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb51a5ba-b266-4df0-91df-2cb1b32de7a2","Type":"ContainerDied","Data":"1f34ea87069397dd69a9082c13bbf63b98f9112ea662d2f1f3b94f8f0f914578"} Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.286039 4711 scope.go:117] "RemoveContainer" containerID="3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 
10:34:30.285911 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.338263 4711 scope.go:117] "RemoveContainer" containerID="c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.351415 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.371841 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.389755 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:30 crc kubenswrapper[4711]: E1202 10:34:30.390386 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="sg-core" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.390417 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="sg-core" Dec 02 10:34:30 crc kubenswrapper[4711]: E1202 10:34:30.390440 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="proxy-httpd" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.390452 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="proxy-httpd" Dec 02 10:34:30 crc kubenswrapper[4711]: E1202 10:34:30.390506 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="ceilometer-central-agent" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.390521 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="ceilometer-central-agent" Dec 02 10:34:30 crc kubenswrapper[4711]: E1202 10:34:30.390552 4711 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="ceilometer-notification-agent" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.390563 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="ceilometer-notification-agent" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.390872 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="ceilometer-notification-agent" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.390906 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="ceilometer-central-agent" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.390923 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="proxy-httpd" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.390947 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" containerName="sg-core" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.393325 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.396004 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.397685 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.403610 4711 scope.go:117] "RemoveContainer" containerID="4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.436448 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.464049 4711 scope.go:117] "RemoveContainer" containerID="1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.478480 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v796r\" (UniqueName: \"kubernetes.io/projected/faca9a74-9df6-451f-b654-dfa3bfb71eda-kube-api-access-v796r\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.478521 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-run-httpd\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.478568 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-log-httpd\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " 
pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.478598 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.478758 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-config-data\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.478820 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.478933 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-scripts\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.542485 4711 scope.go:117] "RemoveContainer" containerID="3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf" Dec 02 10:34:30 crc kubenswrapper[4711]: E1202 10:34:30.546697 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf\": container with ID starting with 
3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf not found: ID does not exist" containerID="3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.546746 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf"} err="failed to get container status \"3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf\": rpc error: code = NotFound desc = could not find container \"3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf\": container with ID starting with 3f44a8bd96618ad73bd33c884acb9d8d23cd13d62a0f3a3e7a1a3993c45976bf not found: ID does not exist" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.546776 4711 scope.go:117] "RemoveContainer" containerID="c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb" Dec 02 10:34:30 crc kubenswrapper[4711]: E1202 10:34:30.547426 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb\": container with ID starting with c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb not found: ID does not exist" containerID="c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.547469 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb"} err="failed to get container status \"c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb\": rpc error: code = NotFound desc = could not find container \"c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb\": container with ID starting with c5ec6ffa25ef83f04983e023a07da2566b609d0d956efb0e1afbb2e1608c1beb not found: ID does not 
exist" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.547500 4711 scope.go:117] "RemoveContainer" containerID="4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca" Dec 02 10:34:30 crc kubenswrapper[4711]: E1202 10:34:30.547903 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca\": container with ID starting with 4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca not found: ID does not exist" containerID="4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.548041 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca"} err="failed to get container status \"4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca\": rpc error: code = NotFound desc = could not find container \"4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca\": container with ID starting with 4d24cb032302d3a00c6061019e81c45170dfbb53e225bd614d03bace723e79ca not found: ID does not exist" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.548079 4711 scope.go:117] "RemoveContainer" containerID="1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411" Dec 02 10:34:30 crc kubenswrapper[4711]: E1202 10:34:30.548537 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411\": container with ID starting with 1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411 not found: ID does not exist" containerID="1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.548560 4711 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411"} err="failed to get container status \"1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411\": rpc error: code = NotFound desc = could not find container \"1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411\": container with ID starting with 1e137c8a55c259455733ddb690f86f95318610f95b00c98115ab880bbc0e6411 not found: ID does not exist" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.580683 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-log-httpd\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.580739 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.580761 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-config-data\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.580804 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.580837 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-scripts\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.580943 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v796r\" (UniqueName: \"kubernetes.io/projected/faca9a74-9df6-451f-b654-dfa3bfb71eda-kube-api-access-v796r\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.580977 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-run-httpd\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.581509 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-log-httpd\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.581840 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-run-httpd\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.586509 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-scripts\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" 
Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.586660 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.587100 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-config-data\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.587471 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.597861 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v796r\" (UniqueName: \"kubernetes.io/projected/faca9a74-9df6-451f-b654-dfa3bfb71eda-kube-api-access-v796r\") pod \"ceilometer-0\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.758668 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.894855 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.986878 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-config-data\") pod \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.987335 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-logs\") pod \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.987403 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-scripts\") pod \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.987430 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-combined-ca-bundle\") pod \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.987467 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnprv\" (UniqueName: \"kubernetes.io/projected/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-kube-api-access-rnprv\") pod \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.987529 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.987552 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-httpd-run\") pod \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.987646 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-public-tls-certs\") pod \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\" (UID: \"7bbf095c-359d-4e14-95e8-d75e57a7f7c2\") " Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.988118 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-logs" (OuterVolumeSpecName: "logs") pod "7bbf095c-359d-4e14-95e8-d75e57a7f7c2" (UID: "7bbf095c-359d-4e14-95e8-d75e57a7f7c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.992980 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7bbf095c-359d-4e14-95e8-d75e57a7f7c2" (UID: "7bbf095c-359d-4e14-95e8-d75e57a7f7c2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.993364 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "7bbf095c-359d-4e14-95e8-d75e57a7f7c2" (UID: "7bbf095c-359d-4e14-95e8-d75e57a7f7c2"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.993549 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-kube-api-access-rnprv" (OuterVolumeSpecName: "kube-api-access-rnprv") pod "7bbf095c-359d-4e14-95e8-d75e57a7f7c2" (UID: "7bbf095c-359d-4e14-95e8-d75e57a7f7c2"). InnerVolumeSpecName "kube-api-access-rnprv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:30 crc kubenswrapper[4711]: I1202 10:34:30.995024 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-scripts" (OuterVolumeSpecName: "scripts") pod "7bbf095c-359d-4e14-95e8-d75e57a7f7c2" (UID: "7bbf095c-359d-4e14-95e8-d75e57a7f7c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.020657 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bbf095c-359d-4e14-95e8-d75e57a7f7c2" (UID: "7bbf095c-359d-4e14-95e8-d75e57a7f7c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.048864 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-config-data" (OuterVolumeSpecName: "config-data") pod "7bbf095c-359d-4e14-95e8-d75e57a7f7c2" (UID: "7bbf095c-359d-4e14-95e8-d75e57a7f7c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.064277 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7bbf095c-359d-4e14-95e8-d75e57a7f7c2" (UID: "7bbf095c-359d-4e14-95e8-d75e57a7f7c2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.089761 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.089791 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.089801 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.089812 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnprv\" (UniqueName: \"kubernetes.io/projected/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-kube-api-access-rnprv\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.089843 4711 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.089853 4711 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.089861 4711 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.089870 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbf095c-359d-4e14-95e8-d75e57a7f7c2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.100214 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb51a5ba-b266-4df0-91df-2cb1b32de7a2" path="/var/lib/kubelet/pods/eb51a5ba-b266-4df0-91df-2cb1b32de7a2/volumes" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.134742 4711 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.193710 4711 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.199853 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:34:31 crc kubenswrapper[4711]: W1202 10:34:31.213907 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaca9a74_9df6_451f_b654_dfa3bfb71eda.slice/crio-2e04efc3a82a9ce6239d2c64fa4971b736d4f1d13fa5db6c949b7d2dc75b2fdc WatchSource:0}: Error finding container 2e04efc3a82a9ce6239d2c64fa4971b736d4f1d13fa5db6c949b7d2dc75b2fdc: Status 404 returned error can't find the container with id 
2e04efc3a82a9ce6239d2c64fa4971b736d4f1d13fa5db6c949b7d2dc75b2fdc Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.301055 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerStarted","Data":"2e04efc3a82a9ce6239d2c64fa4971b736d4f1d13fa5db6c949b7d2dc75b2fdc"} Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.303295 4711 generic.go:334] "Generic (PLEG): container finished" podID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerID="d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076" exitCode=0 Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.303352 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7bbf095c-359d-4e14-95e8-d75e57a7f7c2","Type":"ContainerDied","Data":"d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076"} Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.303395 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7bbf095c-359d-4e14-95e8-d75e57a7f7c2","Type":"ContainerDied","Data":"301ab809156b22229120bb0a606e7e4ac0cb377c2838c0b0ab3f3846b7c20bbb"} Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.303415 4711 scope.go:117] "RemoveContainer" containerID="d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.303574 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.331875 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.346927 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.352028 4711 scope.go:117] "RemoveContainer" containerID="948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.373042 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:34:31 crc kubenswrapper[4711]: E1202 10:34:31.373531 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerName="glance-httpd" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.373555 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerName="glance-httpd" Dec 02 10:34:31 crc kubenswrapper[4711]: E1202 10:34:31.373581 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerName="glance-log" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.373587 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerName="glance-log" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.373759 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerName="glance-httpd" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.373781 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" containerName="glance-log" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.374780 4711 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.376910 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.377172 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.390825 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.404328 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphvr\" (UniqueName: \"kubernetes.io/projected/9e688916-64de-415a-86d9-b54a42d3174d-kube-api-access-kphvr\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.404693 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.404769 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.404892 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.405014 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e688916-64de-415a-86d9-b54a42d3174d-logs\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.405090 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.405190 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e688916-64de-415a-86d9-b54a42d3174d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.405303 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.420662 4711 scope.go:117] "RemoveContainer" 
containerID="d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076" Dec 02 10:34:31 crc kubenswrapper[4711]: E1202 10:34:31.425767 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076\": container with ID starting with d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076 not found: ID does not exist" containerID="d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.425807 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076"} err="failed to get container status \"d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076\": rpc error: code = NotFound desc = could not find container \"d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076\": container with ID starting with d36f882c4391d8e8085f2ddb9948b2ba613885c26bcad041f3988a78eaf0c076 not found: ID does not exist" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.425831 4711 scope.go:117] "RemoveContainer" containerID="948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb" Dec 02 10:34:31 crc kubenswrapper[4711]: E1202 10:34:31.426623 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb\": container with ID starting with 948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb not found: ID does not exist" containerID="948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.426656 4711 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb"} err="failed to get container status \"948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb\": rpc error: code = NotFound desc = could not find container \"948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb\": container with ID starting with 948d4abeac3dda590c11ac710803aadc52aa6444f767a9a2d20587b05030cbbb not found: ID does not exist" Dec 02 10:34:31 crc kubenswrapper[4711]: E1202 10:34:31.482140 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbf095c_359d_4e14_95e8_d75e57a7f7c2.slice\": RecentStats: unable to find data in memory cache]" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.507090 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.507163 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e688916-64de-415a-86d9-b54a42d3174d-logs\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.507190 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 
10:34:31.507224 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e688916-64de-415a-86d9-b54a42d3174d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.507250 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.507270 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphvr\" (UniqueName: \"kubernetes.io/projected/9e688916-64de-415a-86d9-b54a42d3174d-kube-api-access-kphvr\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.507309 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.507324 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.508156 4711 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e688916-64de-415a-86d9-b54a42d3174d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.508178 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.507944 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e688916-64de-415a-86d9-b54a42d3174d-logs\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.520651 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.521330 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.522376 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.522623 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e688916-64de-415a-86d9-b54a42d3174d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.535844 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphvr\" (UniqueName: \"kubernetes.io/projected/9e688916-64de-415a-86d9-b54a42d3174d-kube-api-access-kphvr\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.562878 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9e688916-64de-415a-86d9-b54a42d3174d\") " pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.695536 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 10:34:31 crc kubenswrapper[4711]: I1202 10:34:31.945266 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.120381 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbpgf\" (UniqueName: \"kubernetes.io/projected/6c592ed4-2527-4546-8500-5dfc26ee5dca-kube-api-access-hbpgf\") pod \"6c592ed4-2527-4546-8500-5dfc26ee5dca\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.120453 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-scripts\") pod \"6c592ed4-2527-4546-8500-5dfc26ee5dca\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.120520 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-combined-ca-bundle\") pod \"6c592ed4-2527-4546-8500-5dfc26ee5dca\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.120672 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-config-data\") pod \"6c592ed4-2527-4546-8500-5dfc26ee5dca\" (UID: \"6c592ed4-2527-4546-8500-5dfc26ee5dca\") " Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.131377 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-scripts" (OuterVolumeSpecName: "scripts") pod "6c592ed4-2527-4546-8500-5dfc26ee5dca" (UID: "6c592ed4-2527-4546-8500-5dfc26ee5dca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.145563 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c592ed4-2527-4546-8500-5dfc26ee5dca-kube-api-access-hbpgf" (OuterVolumeSpecName: "kube-api-access-hbpgf") pod "6c592ed4-2527-4546-8500-5dfc26ee5dca" (UID: "6c592ed4-2527-4546-8500-5dfc26ee5dca"). InnerVolumeSpecName "kube-api-access-hbpgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.164061 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c592ed4-2527-4546-8500-5dfc26ee5dca" (UID: "6c592ed4-2527-4546-8500-5dfc26ee5dca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.183494 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-config-data" (OuterVolumeSpecName: "config-data") pod "6c592ed4-2527-4546-8500-5dfc26ee5dca" (UID: "6c592ed4-2527-4546-8500-5dfc26ee5dca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.222920 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbpgf\" (UniqueName: \"kubernetes.io/projected/6c592ed4-2527-4546-8500-5dfc26ee5dca-kube-api-access-hbpgf\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.223175 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.223184 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.223192 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c592ed4-2527-4546-8500-5dfc26ee5dca-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.312696 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" event={"ID":"6c592ed4-2527-4546-8500-5dfc26ee5dca","Type":"ContainerDied","Data":"9d24c8d9e923b3ba125d94bec9b833013a10df834b3c83791d1e7cb0531d83f3"} Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.312731 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d24c8d9e923b3ba125d94bec9b833013a10df834b3c83791d1e7cb0531d83f3" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.312813 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rsnn9" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.336039 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.409692 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 10:34:32 crc kubenswrapper[4711]: E1202 10:34:32.410213 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c592ed4-2527-4546-8500-5dfc26ee5dca" containerName="nova-cell0-conductor-db-sync" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.410235 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c592ed4-2527-4546-8500-5dfc26ee5dca" containerName="nova-cell0-conductor-db-sync" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.410490 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c592ed4-2527-4546-8500-5dfc26ee5dca" containerName="nova-cell0-conductor-db-sync" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.411275 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.414742 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.415211 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vlcc5" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.429842 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.528997 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcbsf\" (UniqueName: \"kubernetes.io/projected/d878b444-84f7-4c21-b377-91c45878b703-kube-api-access-kcbsf\") pod \"nova-cell0-conductor-0\" (UID: \"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.529062 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d878b444-84f7-4c21-b377-91c45878b703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.529117 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d878b444-84f7-4c21-b377-91c45878b703-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.631102 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d878b444-84f7-4c21-b377-91c45878b703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.631203 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d878b444-84f7-4c21-b377-91c45878b703-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.631327 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcbsf\" (UniqueName: \"kubernetes.io/projected/d878b444-84f7-4c21-b377-91c45878b703-kube-api-access-kcbsf\") pod \"nova-cell0-conductor-0\" (UID: \"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.645013 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d878b444-84f7-4c21-b377-91c45878b703-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.646098 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d878b444-84f7-4c21-b377-91c45878b703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.646847 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcbsf\" (UniqueName: \"kubernetes.io/projected/d878b444-84f7-4c21-b377-91c45878b703-kube-api-access-kcbsf\") pod \"nova-cell0-conductor-0\" (UID: 
\"d878b444-84f7-4c21-b377-91c45878b703\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:32 crc kubenswrapper[4711]: I1202 10:34:32.741233 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.090116 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbf095c-359d-4e14-95e8-d75e57a7f7c2" path="/var/lib/kubelet/pods/7bbf095c-359d-4e14-95e8-d75e57a7f7c2/volumes" Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.241942 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.350882 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d878b444-84f7-4c21-b377-91c45878b703","Type":"ContainerStarted","Data":"b40d1e68680bb946e3a45f8c6e2af5a5d2f60b282ca769481c5a54f1f02d0a7c"} Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.354830 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerStarted","Data":"0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85"} Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.356597 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e688916-64de-415a-86d9-b54a42d3174d","Type":"ContainerStarted","Data":"ee303d73c4ab97849c3bdcefc15e05611021e89219a35ff6c46dc81bed2b8c11"} Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.356620 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e688916-64de-415a-86d9-b54a42d3174d","Type":"ContainerStarted","Data":"16e143637ff653100926ef2d15a41374b2b4d0598f52d2105f851e230a2ff9ed"} Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.585522 4711 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.585567 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.623632 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:33 crc kubenswrapper[4711]: I1202 10:34:33.639800 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:34 crc kubenswrapper[4711]: I1202 10:34:34.368461 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerStarted","Data":"ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480"} Dec 02 10:34:34 crc kubenswrapper[4711]: I1202 10:34:34.370441 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d878b444-84f7-4c21-b377-91c45878b703","Type":"ContainerStarted","Data":"79eaa8197da18cbe98d6c6e5f48d71f643bab1cefc78b88fa590ee04727e6766"} Dec 02 10:34:34 crc kubenswrapper[4711]: I1202 10:34:34.370578 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:34 crc kubenswrapper[4711]: I1202 10:34:34.372915 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e688916-64de-415a-86d9-b54a42d3174d","Type":"ContainerStarted","Data":"bbd5f001eb40411dc1cd12f6be81bdd1c94fc8e5c3185249320ed51c665187d2"} Dec 02 10:34:34 crc kubenswrapper[4711]: I1202 10:34:34.373207 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:34 crc kubenswrapper[4711]: I1202 10:34:34.373235 4711 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:34 crc kubenswrapper[4711]: I1202 10:34:34.398568 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.39854307 podStartE2EDuration="2.39854307s" podCreationTimestamp="2025-12-02 10:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:34.39044994 +0000 UTC m=+1264.099816397" watchObservedRunningTime="2025-12-02 10:34:34.39854307 +0000 UTC m=+1264.107909527" Dec 02 10:34:34 crc kubenswrapper[4711]: I1202 10:34:34.434381 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.43435801 podStartE2EDuration="3.43435801s" podCreationTimestamp="2025-12-02 10:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:34.425135681 +0000 UTC m=+1264.134502128" watchObservedRunningTime="2025-12-02 10:34:34.43435801 +0000 UTC m=+1264.143724457" Dec 02 10:34:35 crc kubenswrapper[4711]: I1202 10:34:35.388826 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerStarted","Data":"0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d"} Dec 02 10:34:36 crc kubenswrapper[4711]: I1202 10:34:36.304282 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:36 crc kubenswrapper[4711]: I1202 10:34:36.315598 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 10:34:37 crc kubenswrapper[4711]: I1202 10:34:37.411633 4711 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerStarted","Data":"1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e"} Dec 02 10:34:37 crc kubenswrapper[4711]: I1202 10:34:37.446623 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.25875093 podStartE2EDuration="7.446600695s" podCreationTimestamp="2025-12-02 10:34:30 +0000 UTC" firstStartedPulling="2025-12-02 10:34:31.216916592 +0000 UTC m=+1260.926283039" lastFinishedPulling="2025-12-02 10:34:36.404766357 +0000 UTC m=+1266.114132804" observedRunningTime="2025-12-02 10:34:37.435410752 +0000 UTC m=+1267.144777229" watchObservedRunningTime="2025-12-02 10:34:37.446600695 +0000 UTC m=+1267.155967152" Dec 02 10:34:38 crc kubenswrapper[4711]: I1202 10:34:38.427212 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:34:41 crc kubenswrapper[4711]: I1202 10:34:41.695934 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 10:34:41 crc kubenswrapper[4711]: I1202 10:34:41.696349 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 10:34:41 crc kubenswrapper[4711]: E1202 10:34:41.713100 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbf095c_359d_4e14_95e8_d75e57a7f7c2.slice\": RecentStats: unable to find data in memory cache]" Dec 02 10:34:41 crc kubenswrapper[4711]: I1202 10:34:41.736613 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 10:34:41 crc kubenswrapper[4711]: I1202 10:34:41.764012 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-default-external-api-0" Dec 02 10:34:42 crc kubenswrapper[4711]: I1202 10:34:42.479290 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 10:34:42 crc kubenswrapper[4711]: I1202 10:34:42.479372 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 10:34:42 crc kubenswrapper[4711]: I1202 10:34:42.790496 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.274903 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vlft6"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.276582 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.280876 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.281106 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.289033 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vlft6"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.340605 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.340687 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72h2f\" (UniqueName: \"kubernetes.io/projected/174870ec-da5f-4488-866c-3dcdcdddedf2-kube-api-access-72h2f\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.340713 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-scripts\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.340739 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-config-data\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.442502 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-scripts\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.442831 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-config-data\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.443184 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.443354 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72h2f\" (UniqueName: \"kubernetes.io/projected/174870ec-da5f-4488-866c-3dcdcdddedf2-kube-api-access-72h2f\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.443542 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.444637 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.447473 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-scripts\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.449215 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-config-data\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.453278 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 10:34:43 crc 
kubenswrapper[4711]: I1202 10:34:43.462829 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.478404 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.478811 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72h2f\" (UniqueName: \"kubernetes.io/projected/174870ec-da5f-4488-866c-3dcdcdddedf2-kube-api-access-72h2f\") pod \"nova-cell0-cell-mapping-vlft6\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.594632 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4k4c\" (UniqueName: \"kubernetes.io/projected/a67f21af-6fc5-4ea8-88ce-36e7544879ee-kube-api-access-c4k4c\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.595003 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.595083 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.602763 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.622063 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.624737 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.643006 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.680774 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.696725 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.697183 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4k4c\" (UniqueName: \"kubernetes.io/projected/a67f21af-6fc5-4ea8-88ce-36e7544879ee-kube-api-access-c4k4c\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.697321 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-config-data\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.697433 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zc5q\" (UniqueName: \"kubernetes.io/projected/b51af35f-486c-4afc-9aa9-571d90f285fd-kube-api-access-6zc5q\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.697556 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.697639 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.697831 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b51af35f-486c-4afc-9aa9-571d90f285fd-logs\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.708054 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.722481 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4k4c\" (UniqueName: \"kubernetes.io/projected/a67f21af-6fc5-4ea8-88ce-36e7544879ee-kube-api-access-c4k4c\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.722887 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.738293 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.740088 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.742125 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.792465 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.799165 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b51af35f-486c-4afc-9aa9-571d90f285fd-logs\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.801371 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b51af35f-486c-4afc-9aa9-571d90f285fd-logs\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.809993 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.810535 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4gkj\" (UniqueName: \"kubernetes.io/projected/8c9b40de-a605-41cc-abde-3f65778c35c6-kube-api-access-p4gkj\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.810763 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8c9b40de-a605-41cc-abde-3f65778c35c6-logs\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.810927 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-config-data\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.811075 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.811168 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-config-data\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.811297 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zc5q\" (UniqueName: \"kubernetes.io/projected/b51af35f-486c-4afc-9aa9-571d90f285fd-kube-api-access-6zc5q\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.816690 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" 
Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.819740 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-config-data\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.830756 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zc5q\" (UniqueName: \"kubernetes.io/projected/b51af35f-486c-4afc-9aa9-571d90f285fd-kube-api-access-6zc5q\") pod \"nova-api-0\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " pod="openstack/nova-api-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.835223 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.838079 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.841620 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.893164 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.901548 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.912840 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.912893 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-config-data\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.912936 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-config-data\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.913064 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.913131 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4gkj\" (UniqueName: \"kubernetes.io/projected/8c9b40de-a605-41cc-abde-3f65778c35c6-kube-api-access-p4gkj\") pod \"nova-metadata-0\" 
(UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.913189 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzqrh\" (UniqueName: \"kubernetes.io/projected/de4defe9-4bbb-4c3a-be60-26186fc9d170-kube-api-access-xzqrh\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.913220 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c9b40de-a605-41cc-abde-3f65778c35c6-logs\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.913723 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c9b40de-a605-41cc-abde-3f65778c35c6-logs\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.915295 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-glhhc"] Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.920915 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.923888 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-config-data\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.931368 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:43 crc kubenswrapper[4711]: I1202 10:34:43.936772 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4gkj\" (UniqueName: \"kubernetes.io/projected/8c9b40de-a605-41cc-abde-3f65778c35c6-kube-api-access-p4gkj\") pod \"nova-metadata-0\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.014204 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-glhhc"] Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015465 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-config-data\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015503 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-svc\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: 
\"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015523 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015564 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2h4\" (UniqueName: \"kubernetes.io/projected/f156af1a-5f12-4bd2-a98f-14a2310e2c78-kube-api-access-9d2h4\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015606 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015629 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-config\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015657 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-glhhc\" 
(UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015700 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzqrh\" (UniqueName: \"kubernetes.io/projected/de4defe9-4bbb-4c3a-be60-26186fc9d170-kube-api-access-xzqrh\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.015727 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.027000 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-config-data\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.027492 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.052152 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzqrh\" (UniqueName: \"kubernetes.io/projected/de4defe9-4bbb-4c3a-be60-26186fc9d170-kube-api-access-xzqrh\") pod \"nova-scheduler-0\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " pod="openstack/nova-scheduler-0" Dec 02 10:34:44 crc 
kubenswrapper[4711]: I1202 10:34:44.109905 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.117291 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.117415 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.117524 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-svc\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.117552 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.117600 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d2h4\" (UniqueName: \"kubernetes.io/projected/f156af1a-5f12-4bd2-a98f-14a2310e2c78-kube-api-access-9d2h4\") pod \"dnsmasq-dns-bccf8f775-glhhc\" 
(UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.117688 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-config\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.118703 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-config\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.119917 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.123281 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.124337 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc 
kubenswrapper[4711]: I1202 10:34:44.124753 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-svc\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.139008 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.175161 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.181177 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d2h4\" (UniqueName: \"kubernetes.io/projected/f156af1a-5f12-4bd2-a98f-14a2310e2c78-kube-api-access-9d2h4\") pod \"dnsmasq-dns-bccf8f775-glhhc\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.292359 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.467158 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vlft6"] Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.561861 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vlft6" event={"ID":"174870ec-da5f-4488-866c-3dcdcdddedf2","Type":"ContainerStarted","Data":"74cc731c6257d737325e363ac914105804a326873365632e51593a2befb7d06c"} Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.607869 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.732901 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6z9lh"] Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.734533 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.738710 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.738787 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.755275 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6z9lh"] Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.801186 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.845170 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm64n\" (UniqueName: 
\"kubernetes.io/projected/e8df297e-2c28-4d8e-8c48-46bbaef36487-kube-api-access-bm64n\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.845269 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-scripts\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.845615 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-config-data\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.846084 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.888299 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.895920 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.947629 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm64n\" (UniqueName: 
\"kubernetes.io/projected/e8df297e-2c28-4d8e-8c48-46bbaef36487-kube-api-access-bm64n\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.947704 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-scripts\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.947750 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-config-data\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.947823 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.953473 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-scripts\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.953547 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-config-data\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.953566 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:44 crc kubenswrapper[4711]: I1202 10:34:44.965290 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm64n\" (UniqueName: \"kubernetes.io/projected/e8df297e-2c28-4d8e-8c48-46bbaef36487-kube-api-access-bm64n\") pod \"nova-cell1-conductor-db-sync-6z9lh\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.023762 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-glhhc"] Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.067680 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.435345 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.435827 4711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.544943 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6z9lh"] Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.625029 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b51af35f-486c-4afc-9aa9-571d90f285fd","Type":"ContainerStarted","Data":"dcb4416a85b86e54766cf85294bc6f339be04c9bd3db8d3e62c5484bb7d8665d"} Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.629934 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vlft6" event={"ID":"174870ec-da5f-4488-866c-3dcdcdddedf2","Type":"ContainerStarted","Data":"b30e4cd33baa5dbbc18f184f2d383d80788c4e3be7e8e4ed3accd2f020181bfd"} Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.652979 4711 generic.go:334] "Generic (PLEG): container finished" podID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerID="0041beba60573df95450890bce830ae365ddc2ec9e0305c0859cf45ef98c34db" exitCode=0 Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.654375 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" event={"ID":"f156af1a-5f12-4bd2-a98f-14a2310e2c78","Type":"ContainerDied","Data":"0041beba60573df95450890bce830ae365ddc2ec9e0305c0859cf45ef98c34db"} Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.654421 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" 
event={"ID":"f156af1a-5f12-4bd2-a98f-14a2310e2c78","Type":"ContainerStarted","Data":"7a6b415b756403ded11b99ccd35babcfbaa859bf8ecde3e35779d6d6b05207fc"} Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.679594 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.680150 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a67f21af-6fc5-4ea8-88ce-36e7544879ee","Type":"ContainerStarted","Data":"973696308cc3a82994c3d66e7ebae7fad7ec982bd8527eb59731e0ae7ded0ec2"} Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.689157 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c9b40de-a605-41cc-abde-3f65778c35c6","Type":"ContainerStarted","Data":"e314208d0964ebce2ca647460c3ddbad8d029c13dcb87ad91f3cc989e20a9e7d"} Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.694766 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" event={"ID":"e8df297e-2c28-4d8e-8c48-46bbaef36487","Type":"ContainerStarted","Data":"c390dd64c1fac766e81ec633b07057a6ad68e2b8ca75fa0c9bffc9a507b9313f"} Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.696939 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"de4defe9-4bbb-4c3a-be60-26186fc9d170","Type":"ContainerStarted","Data":"3f0ca24e9903093cc73487b115a7bd89467c177fd97bed8f81c4f13f173b396e"} Dec 02 10:34:45 crc kubenswrapper[4711]: I1202 10:34:45.720306 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vlft6" podStartSLOduration=2.720284798 podStartE2EDuration="2.720284798s" podCreationTimestamp="2025-12-02 10:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-02 10:34:45.660442235 +0000 UTC m=+1275.369808682" watchObservedRunningTime="2025-12-02 10:34:45.720284798 +0000 UTC m=+1275.429651245" Dec 02 10:34:46 crc kubenswrapper[4711]: I1202 10:34:46.708680 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" event={"ID":"e8df297e-2c28-4d8e-8c48-46bbaef36487","Type":"ContainerStarted","Data":"a95615ebf49ca8da7157680c43bcc06396b581988046b9c47f0a33aa88e234fa"} Dec 02 10:34:46 crc kubenswrapper[4711]: I1202 10:34:46.711310 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" event={"ID":"f156af1a-5f12-4bd2-a98f-14a2310e2c78","Type":"ContainerStarted","Data":"e1e038cceb600722916e1e2b28551ee529e9a291fd9a4410af8b2f84d2b6ad6d"} Dec 02 10:34:46 crc kubenswrapper[4711]: I1202 10:34:46.724416 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" podStartSLOduration=2.724394013 podStartE2EDuration="2.724394013s" podCreationTimestamp="2025-12-02 10:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:46.723033456 +0000 UTC m=+1276.432399913" watchObservedRunningTime="2025-12-02 10:34:46.724394013 +0000 UTC m=+1276.433760470" Dec 02 10:34:46 crc kubenswrapper[4711]: I1202 10:34:46.743848 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" podStartSLOduration=3.74383078 podStartE2EDuration="3.74383078s" podCreationTimestamp="2025-12-02 10:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:46.74270989 +0000 UTC m=+1276.452076337" watchObservedRunningTime="2025-12-02 10:34:46.74383078 +0000 UTC m=+1276.453197227" Dec 02 10:34:47 crc kubenswrapper[4711]: I1202 
10:34:47.722481 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.308776 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.330456 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.730102 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c9b40de-a605-41cc-abde-3f65778c35c6","Type":"ContainerStarted","Data":"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275"} Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.730146 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c9b40de-a605-41cc-abde-3f65778c35c6","Type":"ContainerStarted","Data":"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982"} Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.730261 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerName="nova-metadata-log" containerID="cri-o://1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982" gracePeriod=30 Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.730324 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerName="nova-metadata-metadata" containerID="cri-o://0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275" gracePeriod=30 Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.732266 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"de4defe9-4bbb-4c3a-be60-26186fc9d170","Type":"ContainerStarted","Data":"47c3ea9bb9a36334387b4abd55aed24a87616c816ffd361cc3deb09f328c838b"} Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.745115 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b51af35f-486c-4afc-9aa9-571d90f285fd","Type":"ContainerStarted","Data":"0db7e7797d9ddbc0d6e550d7e660b2f8b2262d100557ff87f9106fb05ef77873"} Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.745194 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b51af35f-486c-4afc-9aa9-571d90f285fd","Type":"ContainerStarted","Data":"5869783373d879c8c8a3287eee8ea95faea9a64572cc3d8a653bc93602a2df71"} Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.750642 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a67f21af-6fc5-4ea8-88ce-36e7544879ee","Type":"ContainerStarted","Data":"7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f"} Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.750685 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a67f21af-6fc5-4ea8-88ce-36e7544879ee" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f" gracePeriod=30 Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.765659 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.716274069 podStartE2EDuration="5.76564096s" podCreationTimestamp="2025-12-02 10:34:43 +0000 UTC" firstStartedPulling="2025-12-02 10:34:44.905403813 +0000 UTC m=+1274.614770260" lastFinishedPulling="2025-12-02 10:34:47.954770704 +0000 UTC m=+1277.664137151" observedRunningTime="2025-12-02 10:34:48.762637129 +0000 UTC m=+1278.472003606" watchObservedRunningTime="2025-12-02 
10:34:48.76564096 +0000 UTC m=+1278.475007407" Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.784963 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.610969794 podStartE2EDuration="5.784933783s" podCreationTimestamp="2025-12-02 10:34:43 +0000 UTC" firstStartedPulling="2025-12-02 10:34:44.777133745 +0000 UTC m=+1274.486500192" lastFinishedPulling="2025-12-02 10:34:47.951097734 +0000 UTC m=+1277.660464181" observedRunningTime="2025-12-02 10:34:48.783242076 +0000 UTC m=+1278.492608523" watchObservedRunningTime="2025-12-02 10:34:48.784933783 +0000 UTC m=+1278.494300230" Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.833470 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.51051872 podStartE2EDuration="5.833451178s" podCreationTimestamp="2025-12-02 10:34:43 +0000 UTC" firstStartedPulling="2025-12-02 10:34:44.628106404 +0000 UTC m=+1274.337472851" lastFinishedPulling="2025-12-02 10:34:47.951038862 +0000 UTC m=+1277.660405309" observedRunningTime="2025-12-02 10:34:48.796869886 +0000 UTC m=+1278.506236333" watchObservedRunningTime="2025-12-02 10:34:48.833451178 +0000 UTC m=+1278.542817625" Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.838647 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.793353649 podStartE2EDuration="5.838636499s" podCreationTimestamp="2025-12-02 10:34:43 +0000 UTC" firstStartedPulling="2025-12-02 10:34:44.905754042 +0000 UTC m=+1274.615120489" lastFinishedPulling="2025-12-02 10:34:47.951036892 +0000 UTC m=+1277.660403339" observedRunningTime="2025-12-02 10:34:48.815933444 +0000 UTC m=+1278.525299891" watchObservedRunningTime="2025-12-02 10:34:48.838636499 +0000 UTC m=+1278.548002946" Dec 02 10:34:48 crc kubenswrapper[4711]: I1202 10:34:48.894041 4711 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.139859 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.139908 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.177844 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.624436 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.757807 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4gkj\" (UniqueName: \"kubernetes.io/projected/8c9b40de-a605-41cc-abde-3f65778c35c6-kube-api-access-p4gkj\") pod \"8c9b40de-a605-41cc-abde-3f65778c35c6\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.757927 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-combined-ca-bundle\") pod \"8c9b40de-a605-41cc-abde-3f65778c35c6\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.758073 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c9b40de-a605-41cc-abde-3f65778c35c6-logs\") pod \"8c9b40de-a605-41cc-abde-3f65778c35c6\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.758154 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-config-data\") pod \"8c9b40de-a605-41cc-abde-3f65778c35c6\" (UID: \"8c9b40de-a605-41cc-abde-3f65778c35c6\") " Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.759992 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c9b40de-a605-41cc-abde-3f65778c35c6-logs" (OuterVolumeSpecName: "logs") pod "8c9b40de-a605-41cc-abde-3f65778c35c6" (UID: "8c9b40de-a605-41cc-abde-3f65778c35c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.763963 4711 generic.go:334] "Generic (PLEG): container finished" podID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerID="0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275" exitCode=0 Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.763992 4711 generic.go:334] "Generic (PLEG): container finished" podID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerID="1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982" exitCode=143 Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.764077 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.764176 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c9b40de-a605-41cc-abde-3f65778c35c6","Type":"ContainerDied","Data":"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275"} Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.764226 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c9b40de-a605-41cc-abde-3f65778c35c6","Type":"ContainerDied","Data":"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982"} Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.764249 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c9b40de-a605-41cc-abde-3f65778c35c6","Type":"ContainerDied","Data":"e314208d0964ebce2ca647460c3ddbad8d029c13dcb87ad91f3cc989e20a9e7d"} Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.764278 4711 scope.go:117] "RemoveContainer" containerID="0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.767213 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c9b40de-a605-41cc-abde-3f65778c35c6-kube-api-access-p4gkj" (OuterVolumeSpecName: "kube-api-access-p4gkj") pod "8c9b40de-a605-41cc-abde-3f65778c35c6" (UID: "8c9b40de-a605-41cc-abde-3f65778c35c6"). InnerVolumeSpecName "kube-api-access-p4gkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.794419 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c9b40de-a605-41cc-abde-3f65778c35c6" (UID: "8c9b40de-a605-41cc-abde-3f65778c35c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.799151 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-config-data" (OuterVolumeSpecName: "config-data") pod "8c9b40de-a605-41cc-abde-3f65778c35c6" (UID: "8c9b40de-a605-41cc-abde-3f65778c35c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.839079 4711 scope.go:117] "RemoveContainer" containerID="1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.855701 4711 scope.go:117] "RemoveContainer" containerID="0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275" Dec 02 10:34:49 crc kubenswrapper[4711]: E1202 10:34:49.856331 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275\": container with ID starting with 0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275 not found: ID does not exist" containerID="0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.856374 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275"} err="failed to get container status \"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275\": rpc error: code = NotFound desc = could not find container \"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275\": container with ID starting with 0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275 not found: ID does not exist" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.856403 4711 scope.go:117] "RemoveContainer" 
containerID="1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982" Dec 02 10:34:49 crc kubenswrapper[4711]: E1202 10:34:49.856732 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982\": container with ID starting with 1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982 not found: ID does not exist" containerID="1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.856759 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982"} err="failed to get container status \"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982\": rpc error: code = NotFound desc = could not find container \"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982\": container with ID starting with 1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982 not found: ID does not exist" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.856772 4711 scope.go:117] "RemoveContainer" containerID="0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.857495 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275"} err="failed to get container status \"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275\": rpc error: code = NotFound desc = could not find container \"0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275\": container with ID starting with 0cde65dc6344c84c8f5c888234105b6185d0d5f610041ab687cd4f9767c40275 not found: ID does not exist" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.857541 4711 scope.go:117] 
"RemoveContainer" containerID="1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.857795 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982"} err="failed to get container status \"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982\": rpc error: code = NotFound desc = could not find container \"1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982\": container with ID starting with 1e96be8adc4f31bbff6984f0b31640efaad8dcb0076d54c88a71d287044bc982 not found: ID does not exist" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.860058 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c9b40de-a605-41cc-abde-3f65778c35c6-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.860076 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.860086 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4gkj\" (UniqueName: \"kubernetes.io/projected/8c9b40de-a605-41cc-abde-3f65778c35c6-kube-api-access-p4gkj\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:49 crc kubenswrapper[4711]: I1202 10:34:49.860096 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c9b40de-a605-41cc-abde-3f65778c35c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.102232 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.113929 4711 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.130514 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:50 crc kubenswrapper[4711]: E1202 10:34:50.130983 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerName="nova-metadata-log" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.131007 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerName="nova-metadata-log" Dec 02 10:34:50 crc kubenswrapper[4711]: E1202 10:34:50.131042 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerName="nova-metadata-metadata" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.131052 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerName="nova-metadata-metadata" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.131271 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerName="nova-metadata-log" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.131314 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" containerName="nova-metadata-metadata" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.132783 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.136510 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.136661 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.150766 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.266747 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90c877d1-b901-4265-a20e-28cb2f8cd133-logs\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.266813 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kf7\" (UniqueName: \"kubernetes.io/projected/90c877d1-b901-4265-a20e-28cb2f8cd133-kube-api-access-j8kf7\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.267110 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.267331 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.267389 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-config-data\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.369145 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.369200 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-config-data\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.369272 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90c877d1-b901-4265-a20e-28cb2f8cd133-logs\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.369297 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8kf7\" (UniqueName: \"kubernetes.io/projected/90c877d1-b901-4265-a20e-28cb2f8cd133-kube-api-access-j8kf7\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.369360 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.369779 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90c877d1-b901-4265-a20e-28cb2f8cd133-logs\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.372755 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.372778 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-config-data\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.381918 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.391119 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8kf7\" (UniqueName: \"kubernetes.io/projected/90c877d1-b901-4265-a20e-28cb2f8cd133-kube-api-access-j8kf7\") pod \"nova-metadata-0\" 
(UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.465068 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:50 crc kubenswrapper[4711]: I1202 10:34:50.977026 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:51 crc kubenswrapper[4711]: I1202 10:34:51.094971 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c9b40de-a605-41cc-abde-3f65778c35c6" path="/var/lib/kubelet/pods/8c9b40de-a605-41cc-abde-3f65778c35c6/volumes" Dec 02 10:34:51 crc kubenswrapper[4711]: I1202 10:34:51.839937 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90c877d1-b901-4265-a20e-28cb2f8cd133","Type":"ContainerStarted","Data":"9eff5131cca5df485ae9728b6254e41988f854c85be8be05c318e8350292a096"} Dec 02 10:34:51 crc kubenswrapper[4711]: I1202 10:34:51.840364 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90c877d1-b901-4265-a20e-28cb2f8cd133","Type":"ContainerStarted","Data":"64e0295aed59adfe32348110ec3882104b7d6a34fe306f2956e3a6e20e7b1226"} Dec 02 10:34:51 crc kubenswrapper[4711]: I1202 10:34:51.840378 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90c877d1-b901-4265-a20e-28cb2f8cd133","Type":"ContainerStarted","Data":"18d8f8a2e50e7720cfdbbbae77fc2a6657e101b23f6236d42801784efe5b05f9"} Dec 02 10:34:51 crc kubenswrapper[4711]: I1202 10:34:51.860796 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.860771081 podStartE2EDuration="1.860771081s" podCreationTimestamp="2025-12-02 10:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 
10:34:51.85554695 +0000 UTC m=+1281.564913427" watchObservedRunningTime="2025-12-02 10:34:51.860771081 +0000 UTC m=+1281.570137548" Dec 02 10:34:51 crc kubenswrapper[4711]: E1202 10:34:51.962154 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbf095c_359d_4e14_95e8_d75e57a7f7c2.slice\": RecentStats: unable to find data in memory cache]" Dec 02 10:34:52 crc kubenswrapper[4711]: I1202 10:34:52.854765 4711 generic.go:334] "Generic (PLEG): container finished" podID="e8df297e-2c28-4d8e-8c48-46bbaef36487" containerID="a95615ebf49ca8da7157680c43bcc06396b581988046b9c47f0a33aa88e234fa" exitCode=0 Dec 02 10:34:52 crc kubenswrapper[4711]: I1202 10:34:52.854910 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" event={"ID":"e8df297e-2c28-4d8e-8c48-46bbaef36487","Type":"ContainerDied","Data":"a95615ebf49ca8da7157680c43bcc06396b581988046b9c47f0a33aa88e234fa"} Dec 02 10:34:52 crc kubenswrapper[4711]: I1202 10:34:52.857661 4711 generic.go:334] "Generic (PLEG): container finished" podID="174870ec-da5f-4488-866c-3dcdcdddedf2" containerID="b30e4cd33baa5dbbc18f184f2d383d80788c4e3be7e8e4ed3accd2f020181bfd" exitCode=0 Dec 02 10:34:52 crc kubenswrapper[4711]: I1202 10:34:52.857787 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vlft6" event={"ID":"174870ec-da5f-4488-866c-3dcdcdddedf2","Type":"ContainerDied","Data":"b30e4cd33baa5dbbc18f184f2d383d80788c4e3be7e8e4ed3accd2f020181bfd"} Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.111117 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.111428 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 
10:34:54.177897 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.218938 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.302102 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.381267 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gjj5v"] Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.381498 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" podUID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" containerName="dnsmasq-dns" containerID="cri-o://ffb9c6b35897cd1dea0632fd5cfde3deb118a137560621fd694bfd40bd7c7a0b" gracePeriod=10 Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.463307 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.465794 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.556460 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-combined-ca-bundle\") pod \"e8df297e-2c28-4d8e-8c48-46bbaef36487\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.556514 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-config-data\") pod \"174870ec-da5f-4488-866c-3dcdcdddedf2\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.556632 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72h2f\" (UniqueName: \"kubernetes.io/projected/174870ec-da5f-4488-866c-3dcdcdddedf2-kube-api-access-72h2f\") pod \"174870ec-da5f-4488-866c-3dcdcdddedf2\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.556725 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm64n\" (UniqueName: \"kubernetes.io/projected/e8df297e-2c28-4d8e-8c48-46bbaef36487-kube-api-access-bm64n\") pod \"e8df297e-2c28-4d8e-8c48-46bbaef36487\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.556774 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-config-data\") pod \"e8df297e-2c28-4d8e-8c48-46bbaef36487\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.556799 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-scripts\") pod \"174870ec-da5f-4488-866c-3dcdcdddedf2\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.556888 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-combined-ca-bundle\") pod \"174870ec-da5f-4488-866c-3dcdcdddedf2\" (UID: \"174870ec-da5f-4488-866c-3dcdcdddedf2\") " Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.556912 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-scripts\") pod \"e8df297e-2c28-4d8e-8c48-46bbaef36487\" (UID: \"e8df297e-2c28-4d8e-8c48-46bbaef36487\") " Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.565403 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-scripts" (OuterVolumeSpecName: "scripts") pod "e8df297e-2c28-4d8e-8c48-46bbaef36487" (UID: "e8df297e-2c28-4d8e-8c48-46bbaef36487"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.573625 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-scripts" (OuterVolumeSpecName: "scripts") pod "174870ec-da5f-4488-866c-3dcdcdddedf2" (UID: "174870ec-da5f-4488-866c-3dcdcdddedf2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.573767 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174870ec-da5f-4488-866c-3dcdcdddedf2-kube-api-access-72h2f" (OuterVolumeSpecName: "kube-api-access-72h2f") pod "174870ec-da5f-4488-866c-3dcdcdddedf2" (UID: "174870ec-da5f-4488-866c-3dcdcdddedf2"). InnerVolumeSpecName "kube-api-access-72h2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.581311 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8df297e-2c28-4d8e-8c48-46bbaef36487-kube-api-access-bm64n" (OuterVolumeSpecName: "kube-api-access-bm64n") pod "e8df297e-2c28-4d8e-8c48-46bbaef36487" (UID: "e8df297e-2c28-4d8e-8c48-46bbaef36487"). InnerVolumeSpecName "kube-api-access-bm64n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.598292 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-config-data" (OuterVolumeSpecName: "config-data") pod "e8df297e-2c28-4d8e-8c48-46bbaef36487" (UID: "e8df297e-2c28-4d8e-8c48-46bbaef36487"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.607116 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8df297e-2c28-4d8e-8c48-46bbaef36487" (UID: "e8df297e-2c28-4d8e-8c48-46bbaef36487"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.609129 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "174870ec-da5f-4488-866c-3dcdcdddedf2" (UID: "174870ec-da5f-4488-866c-3dcdcdddedf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.610937 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-config-data" (OuterVolumeSpecName: "config-data") pod "174870ec-da5f-4488-866c-3dcdcdddedf2" (UID: "174870ec-da5f-4488-866c-3dcdcdddedf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.661072 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.661118 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.661130 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.661203 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 
crc kubenswrapper[4711]: I1202 10:34:54.661223 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72h2f\" (UniqueName: \"kubernetes.io/projected/174870ec-da5f-4488-866c-3dcdcdddedf2-kube-api-access-72h2f\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.661238 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm64n\" (UniqueName: \"kubernetes.io/projected/e8df297e-2c28-4d8e-8c48-46bbaef36487-kube-api-access-bm64n\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.661250 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8df297e-2c28-4d8e-8c48-46bbaef36487-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.661261 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174870ec-da5f-4488-866c-3dcdcdddedf2-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.876542 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" event={"ID":"e8df297e-2c28-4d8e-8c48-46bbaef36487","Type":"ContainerDied","Data":"c390dd64c1fac766e81ec633b07057a6ad68e2b8ca75fa0c9bffc9a507b9313f"} Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.876578 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6z9lh" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.876586 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c390dd64c1fac766e81ec633b07057a6ad68e2b8ca75fa0c9bffc9a507b9313f" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.878315 4711 generic.go:334] "Generic (PLEG): container finished" podID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" containerID="ffb9c6b35897cd1dea0632fd5cfde3deb118a137560621fd694bfd40bd7c7a0b" exitCode=0 Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.878426 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" event={"ID":"f4b309e7-82c2-4d00-9f80-ff4789ddd307","Type":"ContainerDied","Data":"ffb9c6b35897cd1dea0632fd5cfde3deb118a137560621fd694bfd40bd7c7a0b"} Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.880417 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vlft6" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.886360 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vlft6" event={"ID":"174870ec-da5f-4488-866c-3dcdcdddedf2","Type":"ContainerDied","Data":"74cc731c6257d737325e363ac914105804a326873365632e51593a2befb7d06c"} Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.886394 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cc731c6257d737325e363ac914105804a326873365632e51593a2befb7d06c" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.949069 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.968061 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 10:34:54 crc kubenswrapper[4711]: E1202 10:34:54.974172 4711 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e8df297e-2c28-4d8e-8c48-46bbaef36487" containerName="nova-cell1-conductor-db-sync" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.974208 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8df297e-2c28-4d8e-8c48-46bbaef36487" containerName="nova-cell1-conductor-db-sync" Dec 02 10:34:54 crc kubenswrapper[4711]: E1202 10:34:54.974241 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174870ec-da5f-4488-866c-3dcdcdddedf2" containerName="nova-manage" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.974248 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="174870ec-da5f-4488-866c-3dcdcdddedf2" containerName="nova-manage" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.974521 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="174870ec-da5f-4488-866c-3dcdcdddedf2" containerName="nova-manage" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.974540 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8df297e-2c28-4d8e-8c48-46bbaef36487" containerName="nova-cell1-conductor-db-sync" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.975203 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.977711 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 10:34:54 crc kubenswrapper[4711]: I1202 10:34:54.986652 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.069427 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.069506 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnq7h\" (UniqueName: \"kubernetes.io/projected/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-kube-api-access-pnq7h\") pod \"nova-cell1-conductor-0\" (UID: \"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.069719 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.144533 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.145121 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerName="nova-metadata-log" 
containerID="cri-o://64e0295aed59adfe32348110ec3882104b7d6a34fe306f2956e3a6e20e7b1226" gracePeriod=30 Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.145601 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerName="nova-metadata-metadata" containerID="cri-o://9eff5131cca5df485ae9728b6254e41988f854c85be8be05c318e8350292a096" gracePeriod=30 Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.160648 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.160889 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-log" containerID="cri-o://5869783373d879c8c8a3287eee8ea95faea9a64572cc3d8a653bc93602a2df71" gracePeriod=30 Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.161060 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-api" containerID="cri-o://0db7e7797d9ddbc0d6e550d7e660b2f8b2262d100557ff87f9106fb05ef77873" gracePeriod=30 Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.180598 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.180670 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnq7h\" (UniqueName: \"kubernetes.io/projected/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-kube-api-access-pnq7h\") pod \"nova-cell1-conductor-0\" (UID: 
\"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.180697 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.183143 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.183389 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.188545 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.197309 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.204408 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnq7h\" (UniqueName: \"kubernetes.io/projected/e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2-kube-api-access-pnq7h\") pod \"nova-cell1-conductor-0\" (UID: \"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2\") " pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.293711 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.466707 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.466796 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.492213 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.593454 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-sb\") pod \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.593502 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-config\") pod \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.593644 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxvkt\" (UniqueName: \"kubernetes.io/projected/f4b309e7-82c2-4d00-9f80-ff4789ddd307-kube-api-access-kxvkt\") pod 
\"f4b309e7-82c2-4d00-9f80-ff4789ddd307\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.593694 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-swift-storage-0\") pod \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.593756 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-nb\") pod \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.593803 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-svc\") pod \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\" (UID: \"f4b309e7-82c2-4d00-9f80-ff4789ddd307\") " Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.607995 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b309e7-82c2-4d00-9f80-ff4789ddd307-kube-api-access-kxvkt" (OuterVolumeSpecName: "kube-api-access-kxvkt") pod "f4b309e7-82c2-4d00-9f80-ff4789ddd307" (UID: "f4b309e7-82c2-4d00-9f80-ff4789ddd307"). InnerVolumeSpecName "kube-api-access-kxvkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.676674 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4b309e7-82c2-4d00-9f80-ff4789ddd307" (UID: "f4b309e7-82c2-4d00-9f80-ff4789ddd307"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.697728 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxvkt\" (UniqueName: \"kubernetes.io/projected/f4b309e7-82c2-4d00-9f80-ff4789ddd307-kube-api-access-kxvkt\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.697758 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.701200 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4b309e7-82c2-4d00-9f80-ff4789ddd307" (UID: "f4b309e7-82c2-4d00-9f80-ff4789ddd307"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.711903 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.724174 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4b309e7-82c2-4d00-9f80-ff4789ddd307" (UID: "f4b309e7-82c2-4d00-9f80-ff4789ddd307"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.728385 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4b309e7-82c2-4d00-9f80-ff4789ddd307" (UID: "f4b309e7-82c2-4d00-9f80-ff4789ddd307"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.737376 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-config" (OuterVolumeSpecName: "config") pod "f4b309e7-82c2-4d00-9f80-ff4789ddd307" (UID: "f4b309e7-82c2-4d00-9f80-ff4789ddd307"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.799580 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.799613 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.799624 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.799633 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b309e7-82c2-4d00-9f80-ff4789ddd307-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.895487 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" event={"ID":"f4b309e7-82c2-4d00-9f80-ff4789ddd307","Type":"ContainerDied","Data":"fcbb83faf96da0e9362a6a76fc0279f85674d614fb9db448095364e21bf949fb"} Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.895797 4711 scope.go:117] "RemoveContainer" 
containerID="ffb9c6b35897cd1dea0632fd5cfde3deb118a137560621fd694bfd40bd7c7a0b" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.896047 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gjj5v" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.922503 4711 generic.go:334] "Generic (PLEG): container finished" podID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerID="5869783373d879c8c8a3287eee8ea95faea9a64572cc3d8a653bc93602a2df71" exitCode=143 Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.922608 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b51af35f-486c-4afc-9aa9-571d90f285fd","Type":"ContainerDied","Data":"5869783373d879c8c8a3287eee8ea95faea9a64572cc3d8a653bc93602a2df71"} Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.927710 4711 generic.go:334] "Generic (PLEG): container finished" podID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerID="9eff5131cca5df485ae9728b6254e41988f854c85be8be05c318e8350292a096" exitCode=0 Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.927764 4711 generic.go:334] "Generic (PLEG): container finished" podID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerID="64e0295aed59adfe32348110ec3882104b7d6a34fe306f2956e3a6e20e7b1226" exitCode=143 Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.928021 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90c877d1-b901-4265-a20e-28cb2f8cd133","Type":"ContainerDied","Data":"9eff5131cca5df485ae9728b6254e41988f854c85be8be05c318e8350292a096"} Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.928361 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90c877d1-b901-4265-a20e-28cb2f8cd133","Type":"ContainerDied","Data":"64e0295aed59adfe32348110ec3882104b7d6a34fe306f2956e3a6e20e7b1226"} Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.950423 
4711 scope.go:117] "RemoveContainer" containerID="3fb5dc5d29faff5845344700b5e1b2d2031d93c525296829d4a377dd8a837184" Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.967354 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gjj5v"] Dec 02 10:34:55 crc kubenswrapper[4711]: I1202 10:34:55.979822 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gjj5v"] Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.021449 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.060151 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.104269 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-nova-metadata-tls-certs\") pod \"90c877d1-b901-4265-a20e-28cb2f8cd133\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.104533 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8kf7\" (UniqueName: \"kubernetes.io/projected/90c877d1-b901-4265-a20e-28cb2f8cd133-kube-api-access-j8kf7\") pod \"90c877d1-b901-4265-a20e-28cb2f8cd133\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.104682 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90c877d1-b901-4265-a20e-28cb2f8cd133-logs\") pod \"90c877d1-b901-4265-a20e-28cb2f8cd133\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.104822 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-combined-ca-bundle\") pod \"90c877d1-b901-4265-a20e-28cb2f8cd133\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.105010 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90c877d1-b901-4265-a20e-28cb2f8cd133-logs" (OuterVolumeSpecName: "logs") pod "90c877d1-b901-4265-a20e-28cb2f8cd133" (UID: "90c877d1-b901-4265-a20e-28cb2f8cd133"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.105119 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-config-data\") pod \"90c877d1-b901-4265-a20e-28cb2f8cd133\" (UID: \"90c877d1-b901-4265-a20e-28cb2f8cd133\") " Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.105981 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90c877d1-b901-4265-a20e-28cb2f8cd133-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.108049 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c877d1-b901-4265-a20e-28cb2f8cd133-kube-api-access-j8kf7" (OuterVolumeSpecName: "kube-api-access-j8kf7") pod "90c877d1-b901-4265-a20e-28cb2f8cd133" (UID: "90c877d1-b901-4265-a20e-28cb2f8cd133"). InnerVolumeSpecName "kube-api-access-j8kf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.130645 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90c877d1-b901-4265-a20e-28cb2f8cd133" (UID: "90c877d1-b901-4265-a20e-28cb2f8cd133"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.149928 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-config-data" (OuterVolumeSpecName: "config-data") pod "90c877d1-b901-4265-a20e-28cb2f8cd133" (UID: "90c877d1-b901-4265-a20e-28cb2f8cd133"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.162546 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "90c877d1-b901-4265-a20e-28cb2f8cd133" (UID: "90c877d1-b901-4265-a20e-28cb2f8cd133"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.207696 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.207724 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.207735 4711 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90c877d1-b901-4265-a20e-28cb2f8cd133-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.207745 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8kf7\" (UniqueName: \"kubernetes.io/projected/90c877d1-b901-4265-a20e-28cb2f8cd133-kube-api-access-j8kf7\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.940151 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90c877d1-b901-4265-a20e-28cb2f8cd133","Type":"ContainerDied","Data":"18d8f8a2e50e7720cfdbbbae77fc2a6657e101b23f6236d42801784efe5b05f9"} Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.940209 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.940425 4711 scope.go:117] "RemoveContainer" containerID="9eff5131cca5df485ae9728b6254e41988f854c85be8be05c318e8350292a096" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.944087 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2","Type":"ContainerStarted","Data":"61324309980af74773f24956d9e46d95ff7b9911705c7fb949ce3dda25c51e28"} Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.944134 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2","Type":"ContainerStarted","Data":"784d03ce0c8de9b970ca9b5e9b42b0c50fc640dacae972ce5f62a8ac75037920"} Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.944182 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.946135 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="de4defe9-4bbb-4c3a-be60-26186fc9d170" containerName="nova-scheduler-scheduler" containerID="cri-o://47c3ea9bb9a36334387b4abd55aed24a87616c816ffd361cc3deb09f328c838b" gracePeriod=30 Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 10:34:56.963514 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.963498088 podStartE2EDuration="2.963498088s" podCreationTimestamp="2025-12-02 10:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:56.961798512 +0000 UTC m=+1286.671164979" watchObservedRunningTime="2025-12-02 10:34:56.963498088 +0000 UTC m=+1286.672864545" Dec 02 10:34:56 crc kubenswrapper[4711]: I1202 
10:34:56.972822 4711 scope.go:117] "RemoveContainer" containerID="64e0295aed59adfe32348110ec3882104b7d6a34fe306f2956e3a6e20e7b1226" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.009018 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.015814 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.032009 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:57 crc kubenswrapper[4711]: E1202 10:34:57.032459 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerName="nova-metadata-metadata" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.032476 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerName="nova-metadata-metadata" Dec 02 10:34:57 crc kubenswrapper[4711]: E1202 10:34:57.032491 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" containerName="dnsmasq-dns" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.032497 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" containerName="dnsmasq-dns" Dec 02 10:34:57 crc kubenswrapper[4711]: E1202 10:34:57.032523 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerName="nova-metadata-log" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.032529 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerName="nova-metadata-log" Dec 02 10:34:57 crc kubenswrapper[4711]: E1202 10:34:57.032543 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" containerName="init" Dec 02 10:34:57 crc 
kubenswrapper[4711]: I1202 10:34:57.032549 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" containerName="init" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.032748 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerName="nova-metadata-log" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.032770 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" containerName="nova-metadata-metadata" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.032784 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" containerName="dnsmasq-dns" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.033811 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.036479 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.036709 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.044221 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.093715 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90c877d1-b901-4265-a20e-28cb2f8cd133" path="/var/lib/kubelet/pods/90c877d1-b901-4265-a20e-28cb2f8cd133/volumes" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.094326 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b309e7-82c2-4d00-9f80-ff4789ddd307" path="/var/lib/kubelet/pods/f4b309e7-82c2-4d00-9f80-ff4789ddd307/volumes" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 
10:34:57.126338 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-config-data\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.126445 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df7e62eb-715e-43c5-af99-1a9620eeb8b6-logs\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.126504 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.126541 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.126588 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2wqz\" (UniqueName: \"kubernetes.io/projected/df7e62eb-715e-43c5-af99-1a9620eeb8b6-kube-api-access-v2wqz\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.228016 4711 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df7e62eb-715e-43c5-af99-1a9620eeb8b6-logs\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.228101 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.228134 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.228176 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2wqz\" (UniqueName: \"kubernetes.io/projected/df7e62eb-715e-43c5-af99-1a9620eeb8b6-kube-api-access-v2wqz\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.228233 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-config-data\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.228564 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df7e62eb-715e-43c5-af99-1a9620eeb8b6-logs\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " 
pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.233471 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.233564 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-config-data\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.234007 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.247789 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2wqz\" (UniqueName: \"kubernetes.io/projected/df7e62eb-715e-43c5-af99-1a9620eeb8b6-kube-api-access-v2wqz\") pod \"nova-metadata-0\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") " pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.372658 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.856082 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:34:57 crc kubenswrapper[4711]: I1202 10:34:57.958667 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df7e62eb-715e-43c5-af99-1a9620eeb8b6","Type":"ContainerStarted","Data":"596e1b33810a760510ce92e2acd7373005a6380fc95e1328e1c43e5c0535b82f"} Dec 02 10:34:58 crc kubenswrapper[4711]: I1202 10:34:58.974753 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df7e62eb-715e-43c5-af99-1a9620eeb8b6","Type":"ContainerStarted","Data":"5a4c7d474181d001c638bb6d4be556d96d9723143fac14c610ed7a43facb3316"} Dec 02 10:34:58 crc kubenswrapper[4711]: I1202 10:34:58.975226 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df7e62eb-715e-43c5-af99-1a9620eeb8b6","Type":"ContainerStarted","Data":"bc4949427ca0b6989ec324573b574190e876c1172966ea72738dd24c11e6332c"} Dec 02 10:34:59 crc kubenswrapper[4711]: I1202 10:34:59.004113 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.004080076 podStartE2EDuration="3.004080076s" podCreationTimestamp="2025-12-02 10:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:58.996359598 +0000 UTC m=+1288.705726075" watchObservedRunningTime="2025-12-02 10:34:59.004080076 +0000 UTC m=+1288.713446553" Dec 02 10:34:59 crc kubenswrapper[4711]: E1202 10:34:59.179618 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="47c3ea9bb9a36334387b4abd55aed24a87616c816ffd361cc3deb09f328c838b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:34:59 crc kubenswrapper[4711]: E1202 10:34:59.182115 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="47c3ea9bb9a36334387b4abd55aed24a87616c816ffd361cc3deb09f328c838b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:34:59 crc kubenswrapper[4711]: E1202 10:34:59.184279 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="47c3ea9bb9a36334387b4abd55aed24a87616c816ffd361cc3deb09f328c838b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:34:59 crc kubenswrapper[4711]: E1202 10:34:59.184344 4711 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="de4defe9-4bbb-4c3a-be60-26186fc9d170" containerName="nova-scheduler-scheduler" Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.009332 4711 generic.go:334] "Generic (PLEG): container finished" podID="de4defe9-4bbb-4c3a-be60-26186fc9d170" containerID="47c3ea9bb9a36334387b4abd55aed24a87616c816ffd361cc3deb09f328c838b" exitCode=0 Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.009418 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"de4defe9-4bbb-4c3a-be60-26186fc9d170","Type":"ContainerDied","Data":"47c3ea9bb9a36334387b4abd55aed24a87616c816ffd361cc3deb09f328c838b"} Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.265548 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.290795 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-config-data\") pod \"de4defe9-4bbb-4c3a-be60-26186fc9d170\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.290894 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-combined-ca-bundle\") pod \"de4defe9-4bbb-4c3a-be60-26186fc9d170\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.290979 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzqrh\" (UniqueName: \"kubernetes.io/projected/de4defe9-4bbb-4c3a-be60-26186fc9d170-kube-api-access-xzqrh\") pod \"de4defe9-4bbb-4c3a-be60-26186fc9d170\" (UID: \"de4defe9-4bbb-4c3a-be60-26186fc9d170\") " Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.303845 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4defe9-4bbb-4c3a-be60-26186fc9d170-kube-api-access-xzqrh" (OuterVolumeSpecName: "kube-api-access-xzqrh") pod "de4defe9-4bbb-4c3a-be60-26186fc9d170" (UID: "de4defe9-4bbb-4c3a-be60-26186fc9d170"). InnerVolumeSpecName "kube-api-access-xzqrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.327564 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-config-data" (OuterVolumeSpecName: "config-data") pod "de4defe9-4bbb-4c3a-be60-26186fc9d170" (UID: "de4defe9-4bbb-4c3a-be60-26186fc9d170"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.352258 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de4defe9-4bbb-4c3a-be60-26186fc9d170" (UID: "de4defe9-4bbb-4c3a-be60-26186fc9d170"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.394523 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.394560 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4defe9-4bbb-4c3a-be60-26186fc9d170-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.394577 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzqrh\" (UniqueName: \"kubernetes.io/projected/de4defe9-4bbb-4c3a-be60-26186fc9d170-kube-api-access-xzqrh\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:00 crc kubenswrapper[4711]: I1202 10:35:00.769711 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.020999 4711 generic.go:334] "Generic (PLEG): container finished" podID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerID="0db7e7797d9ddbc0d6e550d7e660b2f8b2262d100557ff87f9106fb05ef77873" exitCode=0 Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.021253 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b51af35f-486c-4afc-9aa9-571d90f285fd","Type":"ContainerDied","Data":"0db7e7797d9ddbc0d6e550d7e660b2f8b2262d100557ff87f9106fb05ef77873"} Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.022785 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"de4defe9-4bbb-4c3a-be60-26186fc9d170","Type":"ContainerDied","Data":"3f0ca24e9903093cc73487b115a7bd89467c177fd97bed8f81c4f13f173b396e"} Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.022811 4711 scope.go:117] "RemoveContainer" containerID="47c3ea9bb9a36334387b4abd55aed24a87616c816ffd361cc3deb09f328c838b" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.022930 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.058717 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.072742 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.095458 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4defe9-4bbb-4c3a-be60-26186fc9d170" path="/var/lib/kubelet/pods/de4defe9-4bbb-4c3a-be60-26186fc9d170/volumes" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.096316 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:35:01 crc kubenswrapper[4711]: E1202 10:35:01.096743 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4defe9-4bbb-4c3a-be60-26186fc9d170" containerName="nova-scheduler-scheduler" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.096759 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4defe9-4bbb-4c3a-be60-26186fc9d170" containerName="nova-scheduler-scheduler" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.097015 4711 
memory_manager.go:354] "RemoveStaleState removing state" podUID="de4defe9-4bbb-4c3a-be60-26186fc9d170" containerName="nova-scheduler-scheduler" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.098649 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.112347 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.116110 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.161063 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.216890 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-config-data\") pod \"b51af35f-486c-4afc-9aa9-571d90f285fd\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.217023 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b51af35f-486c-4afc-9aa9-571d90f285fd-logs\") pod \"b51af35f-486c-4afc-9aa9-571d90f285fd\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.217114 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zc5q\" (UniqueName: \"kubernetes.io/projected/b51af35f-486c-4afc-9aa9-571d90f285fd-kube-api-access-6zc5q\") pod \"b51af35f-486c-4afc-9aa9-571d90f285fd\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.217198 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-combined-ca-bundle\") pod \"b51af35f-486c-4afc-9aa9-571d90f285fd\" (UID: \"b51af35f-486c-4afc-9aa9-571d90f285fd\") " Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.217552 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjk2l\" (UniqueName: \"kubernetes.io/projected/d5a70ce3-0321-415c-8b3a-8b7cea271106-kube-api-access-rjk2l\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.217559 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b51af35f-486c-4afc-9aa9-571d90f285fd-logs" (OuterVolumeSpecName: "logs") pod "b51af35f-486c-4afc-9aa9-571d90f285fd" (UID: "b51af35f-486c-4afc-9aa9-571d90f285fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.217602 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-config-data\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.217627 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.217789 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b51af35f-486c-4afc-9aa9-571d90f285fd-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.224355 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51af35f-486c-4afc-9aa9-571d90f285fd-kube-api-access-6zc5q" (OuterVolumeSpecName: "kube-api-access-6zc5q") pod "b51af35f-486c-4afc-9aa9-571d90f285fd" (UID: "b51af35f-486c-4afc-9aa9-571d90f285fd"). InnerVolumeSpecName "kube-api-access-6zc5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.246972 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b51af35f-486c-4afc-9aa9-571d90f285fd" (UID: "b51af35f-486c-4afc-9aa9-571d90f285fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.256754 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-config-data" (OuterVolumeSpecName: "config-data") pod "b51af35f-486c-4afc-9aa9-571d90f285fd" (UID: "b51af35f-486c-4afc-9aa9-571d90f285fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.318760 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjk2l\" (UniqueName: \"kubernetes.io/projected/d5a70ce3-0321-415c-8b3a-8b7cea271106-kube-api-access-rjk2l\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.318800 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-config-data\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.318821 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.318940 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.318964 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zc5q\" (UniqueName: \"kubernetes.io/projected/b51af35f-486c-4afc-9aa9-571d90f285fd-kube-api-access-6zc5q\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.318975 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51af35f-486c-4afc-9aa9-571d90f285fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:01 crc 
kubenswrapper[4711]: I1202 10:35:01.323409 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.323415 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-config-data\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.339175 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjk2l\" (UniqueName: \"kubernetes.io/projected/d5a70ce3-0321-415c-8b3a-8b7cea271106-kube-api-access-rjk2l\") pod \"nova-scheduler-0\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.470415 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:35:01 crc kubenswrapper[4711]: W1202 10:35:01.958111 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5a70ce3_0321_415c_8b3a_8b7cea271106.slice/crio-c791f8a443ea3493fe91f933490380bf910e00fd2afff5caeff0bdbec04546a1 WatchSource:0}: Error finding container c791f8a443ea3493fe91f933490380bf910e00fd2afff5caeff0bdbec04546a1: Status 404 returned error can't find the container with id c791f8a443ea3493fe91f933490380bf910e00fd2afff5caeff0bdbec04546a1 Dec 02 10:35:01 crc kubenswrapper[4711]: I1202 10:35:01.960721 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.042358 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5a70ce3-0321-415c-8b3a-8b7cea271106","Type":"ContainerStarted","Data":"c791f8a443ea3493fe91f933490380bf910e00fd2afff5caeff0bdbec04546a1"} Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.045768 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b51af35f-486c-4afc-9aa9-571d90f285fd","Type":"ContainerDied","Data":"dcb4416a85b86e54766cf85294bc6f339be04c9bd3db8d3e62c5484bb7d8665d"} Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.045809 4711 scope.go:117] "RemoveContainer" containerID="0db7e7797d9ddbc0d6e550d7e660b2f8b2262d100557ff87f9106fb05ef77873" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.045942 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.116235 4711 scope.go:117] "RemoveContainer" containerID="5869783373d879c8c8a3287eee8ea95faea9a64572cc3d8a653bc93602a2df71" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.120143 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.195358 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.206754 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:02 crc kubenswrapper[4711]: E1202 10:35:02.207569 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-api" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.207589 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-api" Dec 02 10:35:02 crc kubenswrapper[4711]: E1202 10:35:02.207650 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-log" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.207662 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-log" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.207910 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-api" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.207962 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" containerName="nova-api-log" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.209705 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.212139 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.218156 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.243627 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e76b45-5c39-4333-9993-421478cdf150-logs\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.243705 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4zz\" (UniqueName: \"kubernetes.io/projected/50e76b45-5c39-4333-9993-421478cdf150-kube-api-access-kg4zz\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.243752 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.243794 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-config-data\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: E1202 10:35:02.252208 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbf095c_359d_4e14_95e8_d75e57a7f7c2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb51af35f_486c_4afc_9aa9_571d90f285fd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb51af35f_486c_4afc_9aa9_571d90f285fd.slice/crio-dcb4416a85b86e54766cf85294bc6f339be04c9bd3db8d3e62c5484bb7d8665d\": RecentStats: unable to find data in memory cache]" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.345877 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-config-data\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.346252 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e76b45-5c39-4333-9993-421478cdf150-logs\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.346315 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4zz\" (UniqueName: \"kubernetes.io/projected/50e76b45-5c39-4333-9993-421478cdf150-kube-api-access-kg4zz\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.346372 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 
10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.346813 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e76b45-5c39-4333-9993-421478cdf150-logs\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.350135 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.355698 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-config-data\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.363535 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4zz\" (UniqueName: \"kubernetes.io/projected/50e76b45-5c39-4333-9993-421478cdf150-kube-api-access-kg4zz\") pod \"nova-api-0\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " pod="openstack/nova-api-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.375066 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.375302 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:35:02 crc kubenswrapper[4711]: I1202 10:35:02.539856 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:03 crc kubenswrapper[4711]: W1202 10:35:03.039066 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50e76b45_5c39_4333_9993_421478cdf150.slice/crio-2045b8048c50c84b72ceb23bda9702d41c760a4843940be6056053b3eed4a57a WatchSource:0}: Error finding container 2045b8048c50c84b72ceb23bda9702d41c760a4843940be6056053b3eed4a57a: Status 404 returned error can't find the container with id 2045b8048c50c84b72ceb23bda9702d41c760a4843940be6056053b3eed4a57a Dec 02 10:35:03 crc kubenswrapper[4711]: I1202 10:35:03.042240 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:03 crc kubenswrapper[4711]: I1202 10:35:03.059025 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50e76b45-5c39-4333-9993-421478cdf150","Type":"ContainerStarted","Data":"2045b8048c50c84b72ceb23bda9702d41c760a4843940be6056053b3eed4a57a"} Dec 02 10:35:03 crc kubenswrapper[4711]: I1202 10:35:03.061782 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5a70ce3-0321-415c-8b3a-8b7cea271106","Type":"ContainerStarted","Data":"4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad"} Dec 02 10:35:03 crc kubenswrapper[4711]: I1202 10:35:03.082662 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.082641944 podStartE2EDuration="2.082641944s" podCreationTimestamp="2025-12-02 10:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:03.081519713 +0000 UTC m=+1292.790886160" watchObservedRunningTime="2025-12-02 10:35:03.082641944 +0000 UTC m=+1292.792008391" Dec 02 10:35:03 crc kubenswrapper[4711]: I1202 10:35:03.092122 4711 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="b51af35f-486c-4afc-9aa9-571d90f285fd" path="/var/lib/kubelet/pods/b51af35f-486c-4afc-9aa9-571d90f285fd/volumes" Dec 02 10:35:04 crc kubenswrapper[4711]: I1202 10:35:04.075787 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50e76b45-5c39-4333-9993-421478cdf150","Type":"ContainerStarted","Data":"b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353"} Dec 02 10:35:04 crc kubenswrapper[4711]: I1202 10:35:04.076128 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50e76b45-5c39-4333-9993-421478cdf150","Type":"ContainerStarted","Data":"6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e"} Dec 02 10:35:04 crc kubenswrapper[4711]: I1202 10:35:04.103988 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.103940856 podStartE2EDuration="2.103940856s" podCreationTimestamp="2025-12-02 10:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:04.094851009 +0000 UTC m=+1293.804217496" watchObservedRunningTime="2025-12-02 10:35:04.103940856 +0000 UTC m=+1293.813307323" Dec 02 10:35:04 crc kubenswrapper[4711]: I1202 10:35:04.411963 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:35:04 crc kubenswrapper[4711]: I1202 10:35:04.412554 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed" containerName="kube-state-metrics" containerID="cri-o://7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e" gracePeriod=30 Dec 02 10:35:04 crc kubenswrapper[4711]: I1202 10:35:04.969775 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:35:04 crc kubenswrapper[4711]: I1202 10:35:04.999195 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjvm5\" (UniqueName: \"kubernetes.io/projected/d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed-kube-api-access-wjvm5\") pod \"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed\" (UID: \"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed\") " Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.006597 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed-kube-api-access-wjvm5" (OuterVolumeSpecName: "kube-api-access-wjvm5") pod "d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed" (UID: "d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed"). InnerVolumeSpecName "kube-api-access-wjvm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.085141 4711 generic.go:334] "Generic (PLEG): container finished" podID="d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed" containerID="7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e" exitCode=2 Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.085304 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.096325 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed","Type":"ContainerDied","Data":"7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e"} Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.096365 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed","Type":"ContainerDied","Data":"1bfe1049f953291c544614d722f12c36129b8201579f3f3b2d3227ff6563e3e6"} Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.096381 4711 scope.go:117] "RemoveContainer" containerID="7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.102859 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjvm5\" (UniqueName: \"kubernetes.io/projected/d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed-kube-api-access-wjvm5\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.136417 4711 scope.go:117] "RemoveContainer" containerID="7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e" Dec 02 10:35:05 crc kubenswrapper[4711]: E1202 10:35:05.137179 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e\": container with ID starting with 7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e not found: ID does not exist" containerID="7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.137213 4711 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e"} err="failed to get container status \"7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e\": rpc error: code = NotFound desc = could not find container \"7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e\": container with ID starting with 7b8867c5e30e4b60a366455e215247c145ae7369d7bde45024d87e5cdd7edd9e not found: ID does not exist" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.161019 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.182113 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.194077 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:35:05 crc kubenswrapper[4711]: E1202 10:35:05.194773 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed" containerName="kube-state-metrics" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.194798 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed" containerName="kube-state-metrics" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.195062 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed" containerName="kube-state-metrics" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.195663 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.197702 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.197745 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.202583 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.307843 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpb5l\" (UniqueName: \"kubernetes.io/projected/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-api-access-zpb5l\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.307898 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.307918 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.307999 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.321074 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.409731 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.409850 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpb5l\" (UniqueName: \"kubernetes.io/projected/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-api-access-zpb5l\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.409890 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.409915 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 
10:35:05.414928 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.419512 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.422570 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6abaf105-a517-42d9-86c4-5e6cd5527b94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.428759 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpb5l\" (UniqueName: \"kubernetes.io/projected/6abaf105-a517-42d9-86c4-5e6cd5527b94-kube-api-access-zpb5l\") pod \"kube-state-metrics-0\" (UID: \"6abaf105-a517-42d9-86c4-5e6cd5527b94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.511247 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.943775 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:35:05 crc kubenswrapper[4711]: W1202 10:35:05.949495 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6abaf105_a517_42d9_86c4_5e6cd5527b94.slice/crio-a537770fc2a1ebb9d388b5a83b940ca66ff91711bd91f9afb474ec58c4c7131a WatchSource:0}: Error finding container a537770fc2a1ebb9d388b5a83b940ca66ff91711bd91f9afb474ec58c4c7131a: Status 404 returned error can't find the container with id a537770fc2a1ebb9d388b5a83b940ca66ff91711bd91f9afb474ec58c4c7131a Dec 02 10:35:05 crc kubenswrapper[4711]: I1202 10:35:05.954192 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:35:06 crc kubenswrapper[4711]: I1202 10:35:06.100696 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6abaf105-a517-42d9-86c4-5e6cd5527b94","Type":"ContainerStarted","Data":"a537770fc2a1ebb9d388b5a83b940ca66ff91711bd91f9afb474ec58c4c7131a"} Dec 02 10:35:06 crc kubenswrapper[4711]: I1202 10:35:06.196989 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:06 crc kubenswrapper[4711]: I1202 10:35:06.197317 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="ceilometer-central-agent" containerID="cri-o://0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85" gracePeriod=30 Dec 02 10:35:06 crc kubenswrapper[4711]: I1202 10:35:06.197596 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="proxy-httpd" 
containerID="cri-o://1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e" gracePeriod=30 Dec 02 10:35:06 crc kubenswrapper[4711]: I1202 10:35:06.197763 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="sg-core" containerID="cri-o://0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d" gracePeriod=30 Dec 02 10:35:06 crc kubenswrapper[4711]: I1202 10:35:06.197839 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="ceilometer-notification-agent" containerID="cri-o://ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480" gracePeriod=30 Dec 02 10:35:06 crc kubenswrapper[4711]: I1202 10:35:06.470531 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.089260 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed" path="/var/lib/kubelet/pods/d5f196a7-6e9f-4574-8dda-07ee9b4fd4ed/volumes" Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.116542 4711 generic.go:334] "Generic (PLEG): container finished" podID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerID="1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e" exitCode=0 Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.116587 4711 generic.go:334] "Generic (PLEG): container finished" podID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerID="0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d" exitCode=2 Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.116596 4711 generic.go:334] "Generic (PLEG): container finished" podID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerID="0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85" exitCode=0 Dec 02 
10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.116627 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerDied","Data":"1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e"} Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.116676 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerDied","Data":"0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d"} Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.116693 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerDied","Data":"0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85"} Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.118265 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6abaf105-a517-42d9-86c4-5e6cd5527b94","Type":"ContainerStarted","Data":"75abd03442606743cd11c43f8b35a4e8e8a9e9d17603601bc3447ee6a63bd0ff"} Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.118455 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.137928 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.707137929 podStartE2EDuration="2.137906019s" podCreationTimestamp="2025-12-02 10:35:05 +0000 UTC" firstStartedPulling="2025-12-02 10:35:05.953873415 +0000 UTC m=+1295.663239862" lastFinishedPulling="2025-12-02 10:35:06.384641495 +0000 UTC m=+1296.094007952" observedRunningTime="2025-12-02 10:35:07.135313719 +0000 UTC m=+1296.844680176" watchObservedRunningTime="2025-12-02 10:35:07.137906019 +0000 UTC m=+1296.847272476" Dec 02 10:35:07 crc 
kubenswrapper[4711]: I1202 10:35:07.374397 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 10:35:07 crc kubenswrapper[4711]: I1202 10:35:07.374746 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 10:35:08 crc kubenswrapper[4711]: I1202 10:35:08.398411 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 10:35:08 crc kubenswrapper[4711]: I1202 10:35:08.398802 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.079413 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.108925 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-sg-core-conf-yaml\") pod \"faca9a74-9df6-451f-b654-dfa3bfb71eda\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.109131 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v796r\" (UniqueName: \"kubernetes.io/projected/faca9a74-9df6-451f-b654-dfa3bfb71eda-kube-api-access-v796r\") pod \"faca9a74-9df6-451f-b654-dfa3bfb71eda\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.109224 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-scripts\") pod \"faca9a74-9df6-451f-b654-dfa3bfb71eda\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.109308 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-config-data\") pod \"faca9a74-9df6-451f-b654-dfa3bfb71eda\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.109399 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-combined-ca-bundle\") pod \"faca9a74-9df6-451f-b654-dfa3bfb71eda\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.109444 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-log-httpd\") pod \"faca9a74-9df6-451f-b654-dfa3bfb71eda\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.109555 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-run-httpd\") pod \"faca9a74-9df6-451f-b654-dfa3bfb71eda\" (UID: \"faca9a74-9df6-451f-b654-dfa3bfb71eda\") " Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.110808 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "faca9a74-9df6-451f-b654-dfa3bfb71eda" (UID: "faca9a74-9df6-451f-b654-dfa3bfb71eda"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.110895 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "faca9a74-9df6-451f-b654-dfa3bfb71eda" (UID: "faca9a74-9df6-451f-b654-dfa3bfb71eda"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.111392 4711 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.111430 4711 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faca9a74-9df6-451f-b654-dfa3bfb71eda-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.138131 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-scripts" (OuterVolumeSpecName: "scripts") pod "faca9a74-9df6-451f-b654-dfa3bfb71eda" (UID: "faca9a74-9df6-451f-b654-dfa3bfb71eda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.151873 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faca9a74-9df6-451f-b654-dfa3bfb71eda-kube-api-access-v796r" (OuterVolumeSpecName: "kube-api-access-v796r") pod "faca9a74-9df6-451f-b654-dfa3bfb71eda" (UID: "faca9a74-9df6-451f-b654-dfa3bfb71eda"). InnerVolumeSpecName "kube-api-access-v796r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.162876 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "faca9a74-9df6-451f-b654-dfa3bfb71eda" (UID: "faca9a74-9df6-451f-b654-dfa3bfb71eda"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.176462 4711 generic.go:334] "Generic (PLEG): container finished" podID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerID="ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480" exitCode=0 Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.176510 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerDied","Data":"ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480"} Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.176596 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.176645 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faca9a74-9df6-451f-b654-dfa3bfb71eda","Type":"ContainerDied","Data":"2e04efc3a82a9ce6239d2c64fa4971b736d4f1d13fa5db6c949b7d2dc75b2fdc"} Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.176705 4711 scope.go:117] "RemoveContainer" containerID="1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.206470 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faca9a74-9df6-451f-b654-dfa3bfb71eda" (UID: "faca9a74-9df6-451f-b654-dfa3bfb71eda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.216146 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v796r\" (UniqueName: \"kubernetes.io/projected/faca9a74-9df6-451f-b654-dfa3bfb71eda-kube-api-access-v796r\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.216183 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.216196 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.216208 4711 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.221241 4711 scope.go:117] "RemoveContainer" containerID="0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.232675 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-config-data" (OuterVolumeSpecName: "config-data") pod "faca9a74-9df6-451f-b654-dfa3bfb71eda" (UID: "faca9a74-9df6-451f-b654-dfa3bfb71eda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.241923 4711 scope.go:117] "RemoveContainer" containerID="ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.263870 4711 scope.go:117] "RemoveContainer" containerID="0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.287953 4711 scope.go:117] "RemoveContainer" containerID="1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e" Dec 02 10:35:10 crc kubenswrapper[4711]: E1202 10:35:10.288453 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e\": container with ID starting with 1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e not found: ID does not exist" containerID="1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.288504 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e"} err="failed to get container status \"1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e\": rpc error: code = NotFound desc = could not find container \"1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e\": container with ID starting with 1881faca9101da5b6b5781e76d2b47520260fc96ddcb227c83d415a803118b9e not found: ID does not exist" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.288538 4711 scope.go:117] "RemoveContainer" containerID="0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d" Dec 02 10:35:10 crc kubenswrapper[4711]: E1202 10:35:10.289076 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d\": container with ID starting with 0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d not found: ID does not exist" containerID="0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.289124 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d"} err="failed to get container status \"0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d\": rpc error: code = NotFound desc = could not find container \"0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d\": container with ID starting with 0cdd63a802c5240920c6c9ba30d4f186e76424bca273d63f9a7c40087f2fdf6d not found: ID does not exist" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.289155 4711 scope.go:117] "RemoveContainer" containerID="ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480" Dec 02 10:35:10 crc kubenswrapper[4711]: E1202 10:35:10.289576 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480\": container with ID starting with ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480 not found: ID does not exist" containerID="ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.289602 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480"} err="failed to get container status \"ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480\": rpc error: code = NotFound desc = could not find container \"ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480\": 
container with ID starting with ced481b619c312f0f475720dbbf3cc67dc966bb43065068876fd61abc0ce5480 not found: ID does not exist" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.289621 4711 scope.go:117] "RemoveContainer" containerID="0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85" Dec 02 10:35:10 crc kubenswrapper[4711]: E1202 10:35:10.289962 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85\": container with ID starting with 0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85 not found: ID does not exist" containerID="0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.289994 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85"} err="failed to get container status \"0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85\": rpc error: code = NotFound desc = could not find container \"0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85\": container with ID starting with 0408e14292c21cfe2148d665fafd9b7ac6f1172297b8916c0905bfe8ab85be85 not found: ID does not exist" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.318227 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faca9a74-9df6-451f-b654-dfa3bfb71eda-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.520771 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.531349 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.538531 4711 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:10 crc kubenswrapper[4711]: E1202 10:35:10.538897 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="ceilometer-central-agent" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.538917 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="ceilometer-central-agent" Dec 02 10:35:10 crc kubenswrapper[4711]: E1202 10:35:10.538933 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="ceilometer-notification-agent" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.538939 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="ceilometer-notification-agent" Dec 02 10:35:10 crc kubenswrapper[4711]: E1202 10:35:10.538949 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="sg-core" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.538954 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="sg-core" Dec 02 10:35:10 crc kubenswrapper[4711]: E1202 10:35:10.538991 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="proxy-httpd" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.538997 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="proxy-httpd" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.539202 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="ceilometer-central-agent" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.539222 4711 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="sg-core" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.539243 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="ceilometer-notification-agent" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.539252 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" containerName="proxy-httpd" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.540893 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.542928 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.542991 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.543319 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.574713 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.623156 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-config-data\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.623264 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-run-httpd\") pod \"ceilometer-0\" (UID: 
\"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.623331 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.623363 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.623435 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.623718 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-log-httpd\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.623936 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-scripts\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 
10:35:10.624170 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm4db\" (UniqueName: \"kubernetes.io/projected/40cf6557-cfcf-4840-b90d-f116950455b3-kube-api-access-lm4db\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.726366 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-run-httpd\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.726661 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.726788 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.726917 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-run-httpd\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.726924 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.727086 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-log-httpd\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.727163 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-scripts\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.727223 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4db\" (UniqueName: \"kubernetes.io/projected/40cf6557-cfcf-4840-b90d-f116950455b3-kube-api-access-lm4db\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.727300 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-config-data\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.728232 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-log-httpd\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.731864 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.732102 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.732494 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.733210 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-scripts\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.734849 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-config-data\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.743230 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm4db\" (UniqueName: \"kubernetes.io/projected/40cf6557-cfcf-4840-b90d-f116950455b3-kube-api-access-lm4db\") pod \"ceilometer-0\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") 
" pod="openstack/ceilometer-0" Dec 02 10:35:10 crc kubenswrapper[4711]: I1202 10:35:10.862399 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:11 crc kubenswrapper[4711]: I1202 10:35:11.089748 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faca9a74-9df6-451f-b654-dfa3bfb71eda" path="/var/lib/kubelet/pods/faca9a74-9df6-451f-b654-dfa3bfb71eda/volumes" Dec 02 10:35:11 crc kubenswrapper[4711]: I1202 10:35:11.418958 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:11 crc kubenswrapper[4711]: I1202 10:35:11.470671 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 10:35:11 crc kubenswrapper[4711]: I1202 10:35:11.498647 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 10:35:12 crc kubenswrapper[4711]: I1202 10:35:12.217172 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerStarted","Data":"4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489"} Dec 02 10:35:12 crc kubenswrapper[4711]: I1202 10:35:12.217536 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerStarted","Data":"444ba9346b28b260aebe0589f098c025503e1c69d6b9b3538b945f7327818613"} Dec 02 10:35:12 crc kubenswrapper[4711]: I1202 10:35:12.248636 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 10:35:12 crc kubenswrapper[4711]: E1202 10:35:12.489836 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbf095c_359d_4e14_95e8_d75e57a7f7c2.slice\": 
RecentStats: unable to find data in memory cache]" Dec 02 10:35:12 crc kubenswrapper[4711]: I1202 10:35:12.541057 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:35:12 crc kubenswrapper[4711]: I1202 10:35:12.541161 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:35:13 crc kubenswrapper[4711]: I1202 10:35:13.234092 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerStarted","Data":"ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2"} Dec 02 10:35:13 crc kubenswrapper[4711]: I1202 10:35:13.623313 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 10:35:13 crc kubenswrapper[4711]: I1202 10:35:13.623305 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 10:35:14 crc kubenswrapper[4711]: I1202 10:35:14.243760 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerStarted","Data":"e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e"} Dec 02 10:35:15 crc kubenswrapper[4711]: I1202 10:35:15.522506 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 02 10:35:16 crc kubenswrapper[4711]: I1202 10:35:16.273620 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerStarted","Data":"85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383"} Dec 02 10:35:16 crc kubenswrapper[4711]: I1202 10:35:16.275796 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:35:16 crc kubenswrapper[4711]: I1202 10:35:16.319569 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.598085227 podStartE2EDuration="6.319495219s" podCreationTimestamp="2025-12-02 10:35:10 +0000 UTC" firstStartedPulling="2025-12-02 10:35:11.420650731 +0000 UTC m=+1301.130017218" lastFinishedPulling="2025-12-02 10:35:15.142060763 +0000 UTC m=+1304.851427210" observedRunningTime="2025-12-02 10:35:16.314345859 +0000 UTC m=+1306.023712336" watchObservedRunningTime="2025-12-02 10:35:16.319495219 +0000 UTC m=+1306.028861696" Dec 02 10:35:17 crc kubenswrapper[4711]: I1202 10:35:17.380417 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 10:35:17 crc kubenswrapper[4711]: I1202 10:35:17.386304 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 10:35:17 crc kubenswrapper[4711]: I1202 10:35:17.395089 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 10:35:18 crc kubenswrapper[4711]: I1202 10:35:18.300372 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.213146 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.302172 4711 generic.go:334] "Generic (PLEG): container finished" podID="a67f21af-6fc5-4ea8-88ce-36e7544879ee" containerID="7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f" exitCode=137 Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.302267 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.302271 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a67f21af-6fc5-4ea8-88ce-36e7544879ee","Type":"ContainerDied","Data":"7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f"} Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.302348 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a67f21af-6fc5-4ea8-88ce-36e7544879ee","Type":"ContainerDied","Data":"973696308cc3a82994c3d66e7ebae7fad7ec982bd8527eb59731e0ae7ded0ec2"} Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.302376 4711 scope.go:117] "RemoveContainer" containerID="7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.304271 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-config-data\") pod \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.304375 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-combined-ca-bundle\") pod \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\" (UID: 
\"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.305111 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4k4c\" (UniqueName: \"kubernetes.io/projected/a67f21af-6fc5-4ea8-88ce-36e7544879ee-kube-api-access-c4k4c\") pod \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\" (UID: \"a67f21af-6fc5-4ea8-88ce-36e7544879ee\") " Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.323242 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67f21af-6fc5-4ea8-88ce-36e7544879ee-kube-api-access-c4k4c" (OuterVolumeSpecName: "kube-api-access-c4k4c") pod "a67f21af-6fc5-4ea8-88ce-36e7544879ee" (UID: "a67f21af-6fc5-4ea8-88ce-36e7544879ee"). InnerVolumeSpecName "kube-api-access-c4k4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.340566 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67f21af-6fc5-4ea8-88ce-36e7544879ee" (UID: "a67f21af-6fc5-4ea8-88ce-36e7544879ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.340651 4711 scope.go:117] "RemoveContainer" containerID="7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f" Dec 02 10:35:19 crc kubenswrapper[4711]: E1202 10:35:19.341236 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f\": container with ID starting with 7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f not found: ID does not exist" containerID="7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.341294 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f"} err="failed to get container status \"7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f\": rpc error: code = NotFound desc = could not find container \"7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f\": container with ID starting with 7179fd0fc6e5abae6c9e1ba566e9f0fcb998bd646570b3ea257c8e67ca6b9d1f not found: ID does not exist" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.344028 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-config-data" (OuterVolumeSpecName: "config-data") pod "a67f21af-6fc5-4ea8-88ce-36e7544879ee" (UID: "a67f21af-6fc5-4ea8-88ce-36e7544879ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.408524 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4k4c\" (UniqueName: \"kubernetes.io/projected/a67f21af-6fc5-4ea8-88ce-36e7544879ee-kube-api-access-c4k4c\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.408564 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.408577 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67f21af-6fc5-4ea8-88ce-36e7544879ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.652488 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.668836 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.676871 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:35:19 crc kubenswrapper[4711]: E1202 10:35:19.677744 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67f21af-6fc5-4ea8-88ce-36e7544879ee" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.677806 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67f21af-6fc5-4ea8-88ce-36e7544879ee" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.678336 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67f21af-6fc5-4ea8-88ce-36e7544879ee" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 
10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.679819 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.683600 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.684019 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.686600 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.687859 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.823500 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgm42\" (UniqueName: \"kubernetes.io/projected/8336264b-6d1c-4a37-b329-743ef0e63e48-kube-api-access-rgm42\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.823562 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.823597 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-config-data\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.823658 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.823705 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.925202 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.926055 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.926524 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgm42\" (UniqueName: \"kubernetes.io/projected/8336264b-6d1c-4a37-b329-743ef0e63e48-kube-api-access-rgm42\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.926580 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.926681 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.929438 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.930020 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.930312 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 
02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.930769 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8336264b-6d1c-4a37-b329-743ef0e63e48-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:19 crc kubenswrapper[4711]: I1202 10:35:19.943120 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgm42\" (UniqueName: \"kubernetes.io/projected/8336264b-6d1c-4a37-b329-743ef0e63e48-kube-api-access-rgm42\") pod \"nova-cell1-novncproxy-0\" (UID: \"8336264b-6d1c-4a37-b329-743ef0e63e48\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:20 crc kubenswrapper[4711]: I1202 10:35:20.032460 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:20 crc kubenswrapper[4711]: W1202 10:35:20.485889 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8336264b_6d1c_4a37_b329_743ef0e63e48.slice/crio-8e96b6d8a0f1a84bef629c02bb64e6d0a25d445410267526e7d5dc49d55cdb4e WatchSource:0}: Error finding container 8e96b6d8a0f1a84bef629c02bb64e6d0a25d445410267526e7d5dc49d55cdb4e: Status 404 returned error can't find the container with id 8e96b6d8a0f1a84bef629c02bb64e6d0a25d445410267526e7d5dc49d55cdb4e Dec 02 10:35:20 crc kubenswrapper[4711]: I1202 10:35:20.488143 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:35:21 crc kubenswrapper[4711]: I1202 10:35:21.095800 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67f21af-6fc5-4ea8-88ce-36e7544879ee" path="/var/lib/kubelet/pods/a67f21af-6fc5-4ea8-88ce-36e7544879ee/volumes" Dec 02 10:35:21 crc kubenswrapper[4711]: I1202 10:35:21.336857 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8336264b-6d1c-4a37-b329-743ef0e63e48","Type":"ContainerStarted","Data":"235795604f394f48e0c5d4536012359fe2c1cca40b69eda77dd02f6d741f7584"} Dec 02 10:35:21 crc kubenswrapper[4711]: I1202 10:35:21.336893 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8336264b-6d1c-4a37-b329-743ef0e63e48","Type":"ContainerStarted","Data":"8e96b6d8a0f1a84bef629c02bb64e6d0a25d445410267526e7d5dc49d55cdb4e"} Dec 02 10:35:21 crc kubenswrapper[4711]: I1202 10:35:21.386196 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.386175467 podStartE2EDuration="2.386175467s" podCreationTimestamp="2025-12-02 10:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:21.381394817 +0000 UTC m=+1311.090761274" watchObservedRunningTime="2025-12-02 10:35:21.386175467 +0000 UTC m=+1311.095541914" Dec 02 10:35:22 crc kubenswrapper[4711]: I1202 10:35:22.547414 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 10:35:22 crc kubenswrapper[4711]: I1202 10:35:22.548865 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 10:35:22 crc kubenswrapper[4711]: I1202 10:35:22.551909 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 10:35:22 crc kubenswrapper[4711]: I1202 10:35:22.555522 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 10:35:22 crc kubenswrapper[4711]: I1202 10:35:22.586460 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:35:22 crc kubenswrapper[4711]: I1202 10:35:22.586544 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:35:22 crc kubenswrapper[4711]: E1202 10:35:22.757464 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbf095c_359d_4e14_95e8_d75e57a7f7c2.slice\": RecentStats: unable to find data in memory cache]" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.361125 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.367671 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.624082 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mwvf8"] Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.625533 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.654400 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mwvf8"] Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.725803 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zrff\" (UniqueName: \"kubernetes.io/projected/a450e16e-93e0-4525-8514-f101cc87ae8b-kube-api-access-5zrff\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.726104 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.726124 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.726141 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-config\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.726160 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.726266 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.828504 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.828609 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zrff\" (UniqueName: \"kubernetes.io/projected/a450e16e-93e0-4525-8514-f101cc87ae8b-kube-api-access-5zrff\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.828656 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.828680 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.828701 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-config\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.828721 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.829743 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.829741 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.830524 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.830540 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.831158 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-config\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.859977 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zrff\" (UniqueName: \"kubernetes.io/projected/a450e16e-93e0-4525-8514-f101cc87ae8b-kube-api-access-5zrff\") pod \"dnsmasq-dns-cd5cbd7b9-mwvf8\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:23 crc kubenswrapper[4711]: I1202 10:35:23.969198 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:24 crc kubenswrapper[4711]: I1202 10:35:24.440430 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mwvf8"] Dec 02 10:35:24 crc kubenswrapper[4711]: W1202 10:35:24.442409 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda450e16e_93e0_4525_8514_f101cc87ae8b.slice/crio-b9f4627485a6df0e8b58bdd1292a8edbe32d5aaa5ebbe38b68e8ce59feb5b898 WatchSource:0}: Error finding container b9f4627485a6df0e8b58bdd1292a8edbe32d5aaa5ebbe38b68e8ce59feb5b898: Status 404 returned error can't find the container with id b9f4627485a6df0e8b58bdd1292a8edbe32d5aaa5ebbe38b68e8ce59feb5b898 Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.033156 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.428236 4711 generic.go:334] "Generic (PLEG): container finished" podID="a450e16e-93e0-4525-8514-f101cc87ae8b" containerID="057ca564a653ae77cbd215e8fd11c68a706d7c71b950ef9a4e6362c8787da0ed" exitCode=0 Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.430004 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" event={"ID":"a450e16e-93e0-4525-8514-f101cc87ae8b","Type":"ContainerDied","Data":"057ca564a653ae77cbd215e8fd11c68a706d7c71b950ef9a4e6362c8787da0ed"} Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.430052 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" event={"ID":"a450e16e-93e0-4525-8514-f101cc87ae8b","Type":"ContainerStarted","Data":"b9f4627485a6df0e8b58bdd1292a8edbe32d5aaa5ebbe38b68e8ce59feb5b898"} Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.723475 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:25 crc 
kubenswrapper[4711]: I1202 10:35:25.723807 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="ceilometer-central-agent" containerID="cri-o://4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489" gracePeriod=30 Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.723928 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="sg-core" containerID="cri-o://e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e" gracePeriod=30 Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.723994 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="ceilometer-notification-agent" containerID="cri-o://ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2" gracePeriod=30 Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.723926 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="proxy-httpd" containerID="cri-o://85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383" gracePeriod=30 Dec 02 10:35:25 crc kubenswrapper[4711]: I1202 10:35:25.745845 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.197:3000/\": EOF" Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.089749 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.438464 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" 
event={"ID":"a450e16e-93e0-4525-8514-f101cc87ae8b","Type":"ContainerStarted","Data":"e3b3c062d14e84cb43d5d443ab2c46bb79aa537c709cea928c4a78d1cd0beb19"} Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.438835 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.442110 4711 generic.go:334] "Generic (PLEG): container finished" podID="40cf6557-cfcf-4840-b90d-f116950455b3" containerID="85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383" exitCode=0 Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.442147 4711 generic.go:334] "Generic (PLEG): container finished" podID="40cf6557-cfcf-4840-b90d-f116950455b3" containerID="e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e" exitCode=2 Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.442159 4711 generic.go:334] "Generic (PLEG): container finished" podID="40cf6557-cfcf-4840-b90d-f116950455b3" containerID="4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489" exitCode=0 Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.442196 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerDied","Data":"85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383"} Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.442238 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerDied","Data":"e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e"} Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.442250 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerDied","Data":"4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489"} Dec 02 10:35:26 
crc kubenswrapper[4711]: I1202 10:35:26.442343 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-log" containerID="cri-o://6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e" gracePeriod=30 Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.442391 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-api" containerID="cri-o://b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353" gracePeriod=30 Dec 02 10:35:26 crc kubenswrapper[4711]: I1202 10:35:26.463917 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" podStartSLOduration=3.463898694 podStartE2EDuration="3.463898694s" podCreationTimestamp="2025-12-02 10:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:26.457726857 +0000 UTC m=+1316.167093304" watchObservedRunningTime="2025-12-02 10:35:26.463898694 +0000 UTC m=+1316.173265141" Dec 02 10:35:27 crc kubenswrapper[4711]: I1202 10:35:27.453154 4711 generic.go:334] "Generic (PLEG): container finished" podID="50e76b45-5c39-4333-9993-421478cdf150" containerID="6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e" exitCode=143 Dec 02 10:35:27 crc kubenswrapper[4711]: I1202 10:35:27.454442 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50e76b45-5c39-4333-9993-421478cdf150","Type":"ContainerDied","Data":"6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e"} Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.140107 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.226418 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-scripts\") pod \"40cf6557-cfcf-4840-b90d-f116950455b3\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.226473 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-log-httpd\") pod \"40cf6557-cfcf-4840-b90d-f116950455b3\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.226537 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-sg-core-conf-yaml\") pod \"40cf6557-cfcf-4840-b90d-f116950455b3\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.226560 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-run-httpd\") pod \"40cf6557-cfcf-4840-b90d-f116950455b3\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.226654 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-combined-ca-bundle\") pod \"40cf6557-cfcf-4840-b90d-f116950455b3\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.226732 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-config-data\") pod \"40cf6557-cfcf-4840-b90d-f116950455b3\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.226777 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-ceilometer-tls-certs\") pod \"40cf6557-cfcf-4840-b90d-f116950455b3\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.226868 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm4db\" (UniqueName: \"kubernetes.io/projected/40cf6557-cfcf-4840-b90d-f116950455b3-kube-api-access-lm4db\") pod \"40cf6557-cfcf-4840-b90d-f116950455b3\" (UID: \"40cf6557-cfcf-4840-b90d-f116950455b3\") " Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.227143 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "40cf6557-cfcf-4840-b90d-f116950455b3" (UID: "40cf6557-cfcf-4840-b90d-f116950455b3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.227843 4711 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.232998 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40cf6557-cfcf-4840-b90d-f116950455b3-kube-api-access-lm4db" (OuterVolumeSpecName: "kube-api-access-lm4db") pod "40cf6557-cfcf-4840-b90d-f116950455b3" (UID: "40cf6557-cfcf-4840-b90d-f116950455b3"). InnerVolumeSpecName "kube-api-access-lm4db". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.233444 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "40cf6557-cfcf-4840-b90d-f116950455b3" (UID: "40cf6557-cfcf-4840-b90d-f116950455b3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.238719 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-scripts" (OuterVolumeSpecName: "scripts") pod "40cf6557-cfcf-4840-b90d-f116950455b3" (UID: "40cf6557-cfcf-4840-b90d-f116950455b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.288044 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "40cf6557-cfcf-4840-b90d-f116950455b3" (UID: "40cf6557-cfcf-4840-b90d-f116950455b3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.291716 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "40cf6557-cfcf-4840-b90d-f116950455b3" (UID: "40cf6557-cfcf-4840-b90d-f116950455b3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.316357 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40cf6557-cfcf-4840-b90d-f116950455b3" (UID: "40cf6557-cfcf-4840-b90d-f116950455b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.330328 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.330365 4711 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.330378 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm4db\" (UniqueName: \"kubernetes.io/projected/40cf6557-cfcf-4840-b90d-f116950455b3-kube-api-access-lm4db\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.330389 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.330399 4711 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.330408 4711 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/40cf6557-cfcf-4840-b90d-f116950455b3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.343469 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-config-data" (OuterVolumeSpecName: "config-data") pod "40cf6557-cfcf-4840-b90d-f116950455b3" (UID: "40cf6557-cfcf-4840-b90d-f116950455b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.431840 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cf6557-cfcf-4840-b90d-f116950455b3-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.476797 4711 generic.go:334] "Generic (PLEG): container finished" podID="40cf6557-cfcf-4840-b90d-f116950455b3" containerID="ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2" exitCode=0 Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.476842 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerDied","Data":"ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2"} Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.476870 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40cf6557-cfcf-4840-b90d-f116950455b3","Type":"ContainerDied","Data":"444ba9346b28b260aebe0589f098c025503e1c69d6b9b3538b945f7327818613"} Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.476869 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.476889 4711 scope.go:117] "RemoveContainer" containerID="85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.502636 4711 scope.go:117] "RemoveContainer" containerID="e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.526929 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.532217 4711 scope.go:117] "RemoveContainer" containerID="ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.536599 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.566171 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:29 crc kubenswrapper[4711]: E1202 10:35:29.566707 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="ceilometer-central-agent" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.566739 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="ceilometer-central-agent" Dec 02 10:35:29 crc kubenswrapper[4711]: E1202 10:35:29.566782 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="sg-core" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.566795 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="sg-core" Dec 02 10:35:29 crc kubenswrapper[4711]: E1202 10:35:29.566814 4711 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="ceilometer-notification-agent" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.566826 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="ceilometer-notification-agent" Dec 02 10:35:29 crc kubenswrapper[4711]: E1202 10:35:29.566849 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="proxy-httpd" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.566860 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="proxy-httpd" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.567242 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="proxy-httpd" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.567272 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="ceilometer-notification-agent" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.567376 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="sg-core" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.567400 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" containerName="ceilometer-central-agent" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.569800 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.578277 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.578416 4711 scope.go:117] "RemoveContainer" containerID="4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.579387 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.579428 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.580429 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.607906 4711 scope.go:117] "RemoveContainer" containerID="85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383" Dec 02 10:35:29 crc kubenswrapper[4711]: E1202 10:35:29.608358 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383\": container with ID starting with 85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383 not found: ID does not exist" containerID="85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.608395 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383"} err="failed to get container status \"85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383\": rpc error: code = NotFound desc = could not find container \"85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383\": 
container with ID starting with 85d29bdf919e4d9bc75348aa418ba48a61fe5e881852d0263f1a24ec05b3b383 not found: ID does not exist" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.608423 4711 scope.go:117] "RemoveContainer" containerID="e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e" Dec 02 10:35:29 crc kubenswrapper[4711]: E1202 10:35:29.608806 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e\": container with ID starting with e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e not found: ID does not exist" containerID="e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.608876 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e"} err="failed to get container status \"e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e\": rpc error: code = NotFound desc = could not find container \"e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e\": container with ID starting with e6292c7150b8538e6eebfc14428f1b3ce5687f2034cd7dd684df164b0f26f44e not found: ID does not exist" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.608895 4711 scope.go:117] "RemoveContainer" containerID="ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2" Dec 02 10:35:29 crc kubenswrapper[4711]: E1202 10:35:29.609357 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2\": container with ID starting with ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2 not found: ID does not exist" 
containerID="ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.609389 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2"} err="failed to get container status \"ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2\": rpc error: code = NotFound desc = could not find container \"ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2\": container with ID starting with ee6990a94636452f78d92c858234c8f8745441f1a16351d8202b3fc5cce17ea2 not found: ID does not exist" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.609414 4711 scope.go:117] "RemoveContainer" containerID="4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489" Dec 02 10:35:29 crc kubenswrapper[4711]: E1202 10:35:29.609755 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489\": container with ID starting with 4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489 not found: ID does not exist" containerID="4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.609846 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489"} err="failed to get container status \"4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489\": rpc error: code = NotFound desc = could not find container \"4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489\": container with ID starting with 4d65e458b1bb4abf4375b207e0c1052fe3d3e1a8ab2ebbfa5644933fbb407489 not found: ID does not exist" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.635681 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.635730 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-scripts\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.635802 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-config-data\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.635868 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.635889 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8c836c-181d-4c74-8cfc-7e66357bed76-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.635907 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmql\" (UniqueName: 
\"kubernetes.io/projected/fe8c836c-181d-4c74-8cfc-7e66357bed76-kube-api-access-kxmql\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.635939 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8c836c-181d-4c74-8cfc-7e66357bed76-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.635986 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738182 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738224 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8c836c-181d-4c74-8cfc-7e66357bed76-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738246 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmql\" (UniqueName: \"kubernetes.io/projected/fe8c836c-181d-4c74-8cfc-7e66357bed76-kube-api-access-kxmql\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " 
pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738277 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8c836c-181d-4c74-8cfc-7e66357bed76-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738310 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738396 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738420 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-scripts\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738493 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-config-data\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.738911 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fe8c836c-181d-4c74-8cfc-7e66357bed76-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.739259 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8c836c-181d-4c74-8cfc-7e66357bed76-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.744011 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.745119 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-config-data\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.745712 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.745799 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.747533 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8c836c-181d-4c74-8cfc-7e66357bed76-scripts\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.764062 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmql\" (UniqueName: \"kubernetes.io/projected/fe8c836c-181d-4c74-8cfc-7e66357bed76-kube-api-access-kxmql\") pod \"ceilometer-0\" (UID: \"fe8c836c-181d-4c74-8cfc-7e66357bed76\") " pod="openstack/ceilometer-0" Dec 02 10:35:29 crc kubenswrapper[4711]: I1202 10:35:29.901603 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.033859 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.074296 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.202143 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.358192 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg4zz\" (UniqueName: \"kubernetes.io/projected/50e76b45-5c39-4333-9993-421478cdf150-kube-api-access-kg4zz\") pod \"50e76b45-5c39-4333-9993-421478cdf150\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.358563 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-config-data\") pod \"50e76b45-5c39-4333-9993-421478cdf150\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.359230 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e76b45-5c39-4333-9993-421478cdf150-logs\") pod \"50e76b45-5c39-4333-9993-421478cdf150\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.359561 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-combined-ca-bundle\") pod \"50e76b45-5c39-4333-9993-421478cdf150\" (UID: \"50e76b45-5c39-4333-9993-421478cdf150\") " Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.359825 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50e76b45-5c39-4333-9993-421478cdf150-logs" (OuterVolumeSpecName: "logs") pod "50e76b45-5c39-4333-9993-421478cdf150" (UID: "50e76b45-5c39-4333-9993-421478cdf150"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.363836 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e76b45-5c39-4333-9993-421478cdf150-kube-api-access-kg4zz" (OuterVolumeSpecName: "kube-api-access-kg4zz") pod "50e76b45-5c39-4333-9993-421478cdf150" (UID: "50e76b45-5c39-4333-9993-421478cdf150"). InnerVolumeSpecName "kube-api-access-kg4zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.402640 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-config-data" (OuterVolumeSpecName: "config-data") pod "50e76b45-5c39-4333-9993-421478cdf150" (UID: "50e76b45-5c39-4333-9993-421478cdf150"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.425391 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50e76b45-5c39-4333-9993-421478cdf150" (UID: "50e76b45-5c39-4333-9993-421478cdf150"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.452776 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.461713 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e76b45-5c39-4333-9993-421478cdf150-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.461972 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.461982 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg4zz\" (UniqueName: \"kubernetes.io/projected/50e76b45-5c39-4333-9993-421478cdf150-kube-api-access-kg4zz\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.461992 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e76b45-5c39-4333-9993-421478cdf150-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.488988 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8c836c-181d-4c74-8cfc-7e66357bed76","Type":"ContainerStarted","Data":"07534276135eaeac218adaaf65ef858938bde897c1c8ff5634f2ae2e0d1a1e6a"} Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.492599 4711 generic.go:334] "Generic (PLEG): container finished" podID="50e76b45-5c39-4333-9993-421478cdf150" containerID="b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353" exitCode=0 Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.492728 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"50e76b45-5c39-4333-9993-421478cdf150","Type":"ContainerDied","Data":"b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353"} Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.492809 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50e76b45-5c39-4333-9993-421478cdf150","Type":"ContainerDied","Data":"2045b8048c50c84b72ceb23bda9702d41c760a4843940be6056053b3eed4a57a"} Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.492905 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.492899 4711 scope.go:117] "RemoveContainer" containerID="b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.514212 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.528494 4711 scope.go:117] "RemoveContainer" containerID="6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.529348 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.541511 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.584056 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:30 crc kubenswrapper[4711]: E1202 10:35:30.584659 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-api" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.584674 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-api" Dec 02 10:35:30 crc 
kubenswrapper[4711]: E1202 10:35:30.584692 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-log" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.584700 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-log" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.585076 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-api" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.585099 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e76b45-5c39-4333-9993-421478cdf150" containerName="nova-api-log" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.586471 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.588349 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.592534 4711 scope.go:117] "RemoveContainer" containerID="b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353" Dec 02 10:35:30 crc kubenswrapper[4711]: E1202 10:35:30.593018 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353\": container with ID starting with b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353 not found: ID does not exist" containerID="b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.593049 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353"} err="failed to get 
container status \"b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353\": rpc error: code = NotFound desc = could not find container \"b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353\": container with ID starting with b7140b10a9b260fa5a729e407919a39452522bb15c874243bf4b28d62cec7353 not found: ID does not exist" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.593069 4711 scope.go:117] "RemoveContainer" containerID="6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.595081 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.595328 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 02 10:35:30 crc kubenswrapper[4711]: E1202 10:35:30.596616 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e\": container with ID starting with 6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e not found: ID does not exist" containerID="6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.596641 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e"} err="failed to get container status \"6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e\": rpc error: code = NotFound desc = could not find container \"6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e\": container with ID starting with 6880a4b6be21c1b6212462ca96a615a0a1e1f6006a1b877d870602f8bb46593e not found: ID does not exist" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.606570 4711 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.665464 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-logs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.665535 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-config-data\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.665585 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hcsd\" (UniqueName: \"kubernetes.io/projected/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-kube-api-access-9hcsd\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.665655 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.665678 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.665806 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-public-tls-certs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.729758 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sm4md"] Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.730993 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.733165 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.733467 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.762237 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sm4md"] Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.767282 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-config-data\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.768083 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hcsd\" (UniqueName: \"kubernetes.io/projected/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-kube-api-access-9hcsd\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.768156 4711 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.768183 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.769793 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-public-tls-certs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.769966 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-logs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.770281 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-logs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.773090 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-config-data\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 
10:35:30.773491 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.781702 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.782129 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-public-tls-certs\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.784814 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hcsd\" (UniqueName: \"kubernetes.io/projected/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-kube-api-access-9hcsd\") pod \"nova-api-0\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.871242 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-config-data\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.871322 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.871424 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ljc\" (UniqueName: \"kubernetes.io/projected/3706775c-519a-4cf5-ad2c-7da1b55903dd-kube-api-access-d6ljc\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.871443 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-scripts\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.910211 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.973235 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-config-data\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.973639 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.973762 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6ljc\" (UniqueName: \"kubernetes.io/projected/3706775c-519a-4cf5-ad2c-7da1b55903dd-kube-api-access-d6ljc\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.973826 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-scripts\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.978495 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-scripts\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc 
kubenswrapper[4711]: I1202 10:35:30.982542 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-config-data\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:30 crc kubenswrapper[4711]: I1202 10:35:30.990528 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:31 crc kubenswrapper[4711]: I1202 10:35:31.021837 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6ljc\" (UniqueName: \"kubernetes.io/projected/3706775c-519a-4cf5-ad2c-7da1b55903dd-kube-api-access-d6ljc\") pod \"nova-cell1-cell-mapping-sm4md\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:31 crc kubenswrapper[4711]: I1202 10:35:31.048259 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:31 crc kubenswrapper[4711]: I1202 10:35:31.130304 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40cf6557-cfcf-4840-b90d-f116950455b3" path="/var/lib/kubelet/pods/40cf6557-cfcf-4840-b90d-f116950455b3/volumes" Dec 02 10:35:31 crc kubenswrapper[4711]: I1202 10:35:31.131285 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e76b45-5c39-4333-9993-421478cdf150" path="/var/lib/kubelet/pods/50e76b45-5c39-4333-9993-421478cdf150/volumes" Dec 02 10:35:31 crc kubenswrapper[4711]: W1202 10:35:31.464054 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83db10c6_d6ee_43f7_a9a0_f1153b44ba67.slice/crio-e3429f8a5ca6f15ac6230934ec3fe5c06803d72b7e8f865c906b139b05f40c5f WatchSource:0}: Error finding container e3429f8a5ca6f15ac6230934ec3fe5c06803d72b7e8f865c906b139b05f40c5f: Status 404 returned error can't find the container with id e3429f8a5ca6f15ac6230934ec3fe5c06803d72b7e8f865c906b139b05f40c5f Dec 02 10:35:31 crc kubenswrapper[4711]: I1202 10:35:31.482976 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:31 crc kubenswrapper[4711]: I1202 10:35:31.508685 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83db10c6-d6ee-43f7-a9a0-f1153b44ba67","Type":"ContainerStarted","Data":"e3429f8a5ca6f15ac6230934ec3fe5c06803d72b7e8f865c906b139b05f40c5f"} Dec 02 10:35:31 crc kubenswrapper[4711]: I1202 10:35:31.511407 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8c836c-181d-4c74-8cfc-7e66357bed76","Type":"ContainerStarted","Data":"9539d4e2cf0a2af5c9187544d8d597c05356dc6095cbb51b673edc64bc8ade5f"} Dec 02 10:35:31 crc kubenswrapper[4711]: W1202 10:35:31.650868 4711 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3706775c_519a_4cf5_ad2c_7da1b55903dd.slice/crio-70e176f60de61c7c3c7b1d0c06682e519c2207620ae2328a9554b27515e22278 WatchSource:0}: Error finding container 70e176f60de61c7c3c7b1d0c06682e519c2207620ae2328a9554b27515e22278: Status 404 returned error can't find the container with id 70e176f60de61c7c3c7b1d0c06682e519c2207620ae2328a9554b27515e22278 Dec 02 10:35:31 crc kubenswrapper[4711]: I1202 10:35:31.657725 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sm4md"] Dec 02 10:35:32 crc kubenswrapper[4711]: I1202 10:35:32.526612 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sm4md" event={"ID":"3706775c-519a-4cf5-ad2c-7da1b55903dd","Type":"ContainerStarted","Data":"d290cd33687e5ad39858c3607f11bd8aca3f3f56b2f80318903c0a30926618d8"} Dec 02 10:35:32 crc kubenswrapper[4711]: I1202 10:35:32.526985 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sm4md" event={"ID":"3706775c-519a-4cf5-ad2c-7da1b55903dd","Type":"ContainerStarted","Data":"70e176f60de61c7c3c7b1d0c06682e519c2207620ae2328a9554b27515e22278"} Dec 02 10:35:32 crc kubenswrapper[4711]: I1202 10:35:32.528767 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83db10c6-d6ee-43f7-a9a0-f1153b44ba67","Type":"ContainerStarted","Data":"239ffe0df4fbade77329729307d0956684118079f90094f8124278b3d89bf183"} Dec 02 10:35:32 crc kubenswrapper[4711]: I1202 10:35:32.528799 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83db10c6-d6ee-43f7-a9a0-f1153b44ba67","Type":"ContainerStarted","Data":"1c17aeea9cbfd549fa8a6ff468d20fbf14f39b894a4a21a97c01406bdeecc4b6"} Dec 02 10:35:32 crc kubenswrapper[4711]: I1202 10:35:32.646920 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.646901811 
podStartE2EDuration="2.646901811s" podCreationTimestamp="2025-12-02 10:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:32.644800744 +0000 UTC m=+1322.354167191" watchObservedRunningTime="2025-12-02 10:35:32.646901811 +0000 UTC m=+1322.356268258" Dec 02 10:35:32 crc kubenswrapper[4711]: I1202 10:35:32.665969 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sm4md" podStartSLOduration=2.665930317 podStartE2EDuration="2.665930317s" podCreationTimestamp="2025-12-02 10:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:32.576378049 +0000 UTC m=+1322.285744496" watchObservedRunningTime="2025-12-02 10:35:32.665930317 +0000 UTC m=+1322.375296764" Dec 02 10:35:33 crc kubenswrapper[4711]: I1202 10:35:33.542020 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8c836c-181d-4c74-8cfc-7e66357bed76","Type":"ContainerStarted","Data":"fa6cba91ae5183f164fb3a22d251e9164ffc7a8595dc3ad29e3b8fe39b521c74"} Dec 02 10:35:33 crc kubenswrapper[4711]: I1202 10:35:33.971172 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.041542 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-glhhc"] Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.041847 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" podUID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerName="dnsmasq-dns" containerID="cri-o://e1e038cceb600722916e1e2b28551ee529e9a291fd9a4410af8b2f84d2b6ad6d" gracePeriod=10 Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.550720 4711 
generic.go:334] "Generic (PLEG): container finished" podID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerID="e1e038cceb600722916e1e2b28551ee529e9a291fd9a4410af8b2f84d2b6ad6d" exitCode=0 Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.550796 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" event={"ID":"f156af1a-5f12-4bd2-a98f-14a2310e2c78","Type":"ContainerDied","Data":"e1e038cceb600722916e1e2b28551ee529e9a291fd9a4410af8b2f84d2b6ad6d"} Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.551224 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" event={"ID":"f156af1a-5f12-4bd2-a98f-14a2310e2c78","Type":"ContainerDied","Data":"7a6b415b756403ded11b99ccd35babcfbaa859bf8ecde3e35779d6d6b05207fc"} Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.551239 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a6b415b756403ded11b99ccd35babcfbaa859bf8ecde3e35779d6d6b05207fc" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.551372 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.557634 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8c836c-181d-4c74-8cfc-7e66357bed76","Type":"ContainerStarted","Data":"4abbd59cc19da55cd700580f3f2f750e081872d5e3bd5333c6c52444956e3208"} Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.668546 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-svc\") pod \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.668604 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-sb\") pod \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.668772 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-nb\") pod \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.668804 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d2h4\" (UniqueName: \"kubernetes.io/projected/f156af1a-5f12-4bd2-a98f-14a2310e2c78-kube-api-access-9d2h4\") pod \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.668825 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0\") pod \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.668847 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-config\") pod \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.677632 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f156af1a-5f12-4bd2-a98f-14a2310e2c78-kube-api-access-9d2h4" (OuterVolumeSpecName: "kube-api-access-9d2h4") pod "f156af1a-5f12-4bd2-a98f-14a2310e2c78" (UID: "f156af1a-5f12-4bd2-a98f-14a2310e2c78"). InnerVolumeSpecName "kube-api-access-9d2h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.720606 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f156af1a-5f12-4bd2-a98f-14a2310e2c78" (UID: "f156af1a-5f12-4bd2-a98f-14a2310e2c78"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.727184 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-config" (OuterVolumeSpecName: "config") pod "f156af1a-5f12-4bd2-a98f-14a2310e2c78" (UID: "f156af1a-5f12-4bd2-a98f-14a2310e2c78"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.731695 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f156af1a-5f12-4bd2-a98f-14a2310e2c78" (UID: "f156af1a-5f12-4bd2-a98f-14a2310e2c78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4711]: E1202 10:35:34.744639 4711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0 podName:f156af1a-5f12-4bd2-a98f-14a2310e2c78 nodeName:}" failed. No retries permitted until 2025-12-02 10:35:35.244585408 +0000 UTC m=+1324.953951865 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-swift-storage-0" (UniqueName: "kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0") pod "f156af1a-5f12-4bd2-a98f-14a2310e2c78" (UID: "f156af1a-5f12-4bd2-a98f-14a2310e2c78") : error deleting /var/lib/kubelet/pods/f156af1a-5f12-4bd2-a98f-14a2310e2c78/volume-subpaths: remove /var/lib/kubelet/pods/f156af1a-5f12-4bd2-a98f-14a2310e2c78/volume-subpaths: no such file or directory Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.744975 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f156af1a-5f12-4bd2-a98f-14a2310e2c78" (UID: "f156af1a-5f12-4bd2-a98f-14a2310e2c78"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.770987 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.771021 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d2h4\" (UniqueName: \"kubernetes.io/projected/f156af1a-5f12-4bd2-a98f-14a2310e2c78-kube-api-access-9d2h4\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.771031 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.771040 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:34 crc kubenswrapper[4711]: I1202 10:35:34.771047 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:35 crc kubenswrapper[4711]: I1202 10:35:35.280751 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0\") pod \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\" (UID: \"f156af1a-5f12-4bd2-a98f-14a2310e2c78\") " Dec 02 10:35:35 crc kubenswrapper[4711]: I1202 10:35:35.281597 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0" (OuterVolumeSpecName: 
"dns-swift-storage-0") pod "f156af1a-5f12-4bd2-a98f-14a2310e2c78" (UID: "f156af1a-5f12-4bd2-a98f-14a2310e2c78"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:35 crc kubenswrapper[4711]: I1202 10:35:35.383728 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f156af1a-5f12-4bd2-a98f-14a2310e2c78-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:35 crc kubenswrapper[4711]: I1202 10:35:35.569068 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" Dec 02 10:35:35 crc kubenswrapper[4711]: I1202 10:35:35.569218 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8c836c-181d-4c74-8cfc-7e66357bed76","Type":"ContainerStarted","Data":"605b76d70cea82c39c87b7eb8f0595b55c6c8ae7f27aa27ba97d3e7068a6ea47"} Dec 02 10:35:35 crc kubenswrapper[4711]: I1202 10:35:35.603857 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.137845315 podStartE2EDuration="6.603833475s" podCreationTimestamp="2025-12-02 10:35:29 +0000 UTC" firstStartedPulling="2025-12-02 10:35:30.457344584 +0000 UTC m=+1320.166711051" lastFinishedPulling="2025-12-02 10:35:34.923332754 +0000 UTC m=+1324.632699211" observedRunningTime="2025-12-02 10:35:35.595790348 +0000 UTC m=+1325.305156855" watchObservedRunningTime="2025-12-02 10:35:35.603833475 +0000 UTC m=+1325.313199922" Dec 02 10:35:35 crc kubenswrapper[4711]: I1202 10:35:35.624058 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-glhhc"] Dec 02 10:35:35 crc kubenswrapper[4711]: I1202 10:35:35.633978 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-glhhc"] Dec 02 10:35:36 crc kubenswrapper[4711]: I1202 10:35:36.577627 4711 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:35:37 crc kubenswrapper[4711]: I1202 10:35:37.098735 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" path="/var/lib/kubelet/pods/f156af1a-5f12-4bd2-a98f-14a2310e2c78/volumes" Dec 02 10:35:37 crc kubenswrapper[4711]: I1202 10:35:37.592997 4711 generic.go:334] "Generic (PLEG): container finished" podID="3706775c-519a-4cf5-ad2c-7da1b55903dd" containerID="d290cd33687e5ad39858c3607f11bd8aca3f3f56b2f80318903c0a30926618d8" exitCode=0 Dec 02 10:35:37 crc kubenswrapper[4711]: I1202 10:35:37.593120 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sm4md" event={"ID":"3706775c-519a-4cf5-ad2c-7da1b55903dd","Type":"ContainerDied","Data":"d290cd33687e5ad39858c3607f11bd8aca3f3f56b2f80318903c0a30926618d8"} Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.066318 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.156181 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-config-data\") pod \"3706775c-519a-4cf5-ad2c-7da1b55903dd\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.156262 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-combined-ca-bundle\") pod \"3706775c-519a-4cf5-ad2c-7da1b55903dd\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.156412 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6ljc\" (UniqueName: \"kubernetes.io/projected/3706775c-519a-4cf5-ad2c-7da1b55903dd-kube-api-access-d6ljc\") pod \"3706775c-519a-4cf5-ad2c-7da1b55903dd\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.156544 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-scripts\") pod \"3706775c-519a-4cf5-ad2c-7da1b55903dd\" (UID: \"3706775c-519a-4cf5-ad2c-7da1b55903dd\") " Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.163254 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3706775c-519a-4cf5-ad2c-7da1b55903dd-kube-api-access-d6ljc" (OuterVolumeSpecName: "kube-api-access-d6ljc") pod "3706775c-519a-4cf5-ad2c-7da1b55903dd" (UID: "3706775c-519a-4cf5-ad2c-7da1b55903dd"). InnerVolumeSpecName "kube-api-access-d6ljc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.163275 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-scripts" (OuterVolumeSpecName: "scripts") pod "3706775c-519a-4cf5-ad2c-7da1b55903dd" (UID: "3706775c-519a-4cf5-ad2c-7da1b55903dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.188192 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3706775c-519a-4cf5-ad2c-7da1b55903dd" (UID: "3706775c-519a-4cf5-ad2c-7da1b55903dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.203815 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-config-data" (OuterVolumeSpecName: "config-data") pod "3706775c-519a-4cf5-ad2c-7da1b55903dd" (UID: "3706775c-519a-4cf5-ad2c-7da1b55903dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.258793 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.258819 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.258830 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6ljc\" (UniqueName: \"kubernetes.io/projected/3706775c-519a-4cf5-ad2c-7da1b55903dd-kube-api-access-d6ljc\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.258840 4711 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3706775c-519a-4cf5-ad2c-7da1b55903dd-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.294647 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-glhhc" podUID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: i/o timeout" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.614668 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sm4md" event={"ID":"3706775c-519a-4cf5-ad2c-7da1b55903dd","Type":"ContainerDied","Data":"70e176f60de61c7c3c7b1d0c06682e519c2207620ae2328a9554b27515e22278"} Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.614712 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e176f60de61c7c3c7b1d0c06682e519c2207620ae2328a9554b27515e22278" Dec 02 10:35:39 crc kubenswrapper[4711]: 
I1202 10:35:39.614770 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sm4md" Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.821196 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.821545 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerName="nova-api-log" containerID="cri-o://1c17aeea9cbfd549fa8a6ff468d20fbf14f39b894a4a21a97c01406bdeecc4b6" gracePeriod=30 Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.822122 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerName="nova-api-api" containerID="cri-o://239ffe0df4fbade77329729307d0956684118079f90094f8124278b3d89bf183" gracePeriod=30 Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.835525 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.835746 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-log" containerID="cri-o://bc4949427ca0b6989ec324573b574190e876c1172966ea72738dd24c11e6332c" gracePeriod=30 Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.836155 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-metadata" containerID="cri-o://5a4c7d474181d001c638bb6d4be556d96d9723143fac14c610ed7a43facb3316" gracePeriod=30 Dec 02 10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.849400 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 
10:35:39 crc kubenswrapper[4711]: I1202 10:35:39.849627 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d5a70ce3-0321-415c-8b3a-8b7cea271106" containerName="nova-scheduler-scheduler" containerID="cri-o://4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad" gracePeriod=30 Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.623507 4711 generic.go:334] "Generic (PLEG): container finished" podID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerID="bc4949427ca0b6989ec324573b574190e876c1172966ea72738dd24c11e6332c" exitCode=143 Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.623586 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df7e62eb-715e-43c5-af99-1a9620eeb8b6","Type":"ContainerDied","Data":"bc4949427ca0b6989ec324573b574190e876c1172966ea72738dd24c11e6332c"} Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.625859 4711 generic.go:334] "Generic (PLEG): container finished" podID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerID="239ffe0df4fbade77329729307d0956684118079f90094f8124278b3d89bf183" exitCode=0 Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.625878 4711 generic.go:334] "Generic (PLEG): container finished" podID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerID="1c17aeea9cbfd549fa8a6ff468d20fbf14f39b894a4a21a97c01406bdeecc4b6" exitCode=143 Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.625898 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83db10c6-d6ee-43f7-a9a0-f1153b44ba67","Type":"ContainerDied","Data":"239ffe0df4fbade77329729307d0956684118079f90094f8124278b3d89bf183"} Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.625919 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"83db10c6-d6ee-43f7-a9a0-f1153b44ba67","Type":"ContainerDied","Data":"1c17aeea9cbfd549fa8a6ff468d20fbf14f39b894a4a21a97c01406bdeecc4b6"} Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.726417 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.791299 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-combined-ca-bundle\") pod \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.791421 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-public-tls-certs\") pod \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.791451 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hcsd\" (UniqueName: \"kubernetes.io/projected/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-kube-api-access-9hcsd\") pod \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.791676 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-logs\") pod \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.791741 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-config-data\") pod 
\"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.791767 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-internal-tls-certs\") pod \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\" (UID: \"83db10c6-d6ee-43f7-a9a0-f1153b44ba67\") " Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.792223 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-logs" (OuterVolumeSpecName: "logs") pod "83db10c6-d6ee-43f7-a9a0-f1153b44ba67" (UID: "83db10c6-d6ee-43f7-a9a0-f1153b44ba67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.800279 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-kube-api-access-9hcsd" (OuterVolumeSpecName: "kube-api-access-9hcsd") pod "83db10c6-d6ee-43f7-a9a0-f1153b44ba67" (UID: "83db10c6-d6ee-43f7-a9a0-f1153b44ba67"). InnerVolumeSpecName "kube-api-access-9hcsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.840989 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-config-data" (OuterVolumeSpecName: "config-data") pod "83db10c6-d6ee-43f7-a9a0-f1153b44ba67" (UID: "83db10c6-d6ee-43f7-a9a0-f1153b44ba67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.861083 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83db10c6-d6ee-43f7-a9a0-f1153b44ba67" (UID: "83db10c6-d6ee-43f7-a9a0-f1153b44ba67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.863926 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "83db10c6-d6ee-43f7-a9a0-f1153b44ba67" (UID: "83db10c6-d6ee-43f7-a9a0-f1153b44ba67"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.872618 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "83db10c6-d6ee-43f7-a9a0-f1153b44ba67" (UID: "83db10c6-d6ee-43f7-a9a0-f1153b44ba67"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.894230 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.894267 4711 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.894277 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hcsd\" (UniqueName: \"kubernetes.io/projected/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-kube-api-access-9hcsd\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.894288 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.894297 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:40 crc kubenswrapper[4711]: I1202 10:35:40.894305 4711 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83db10c6-d6ee-43f7-a9a0-f1153b44ba67-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.472962 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.474224 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.475914 4711 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.476102 4711 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d5a70ce3-0321-415c-8b3a-8b7cea271106" containerName="nova-scheduler-scheduler" Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.642237 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83db10c6-d6ee-43f7-a9a0-f1153b44ba67","Type":"ContainerDied","Data":"e3429f8a5ca6f15ac6230934ec3fe5c06803d72b7e8f865c906b139b05f40c5f"} Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.642321 4711 scope.go:117] "RemoveContainer" containerID="239ffe0df4fbade77329729307d0956684118079f90094f8124278b3d89bf183" Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.642359 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.674174 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.681493 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.684077 4711 scope.go:117] "RemoveContainer" containerID="1c17aeea9cbfd549fa8a6ff468d20fbf14f39b894a4a21a97c01406bdeecc4b6"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.693366 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.693845 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerName="init"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.693858 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerName="init"
Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.693873 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerName="nova-api-api"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.693880 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerName="nova-api-api"
Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.693897 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerName="nova-api-log"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.693904 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerName="nova-api-log"
Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.693924 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3706775c-519a-4cf5-ad2c-7da1b55903dd" containerName="nova-manage"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.693929 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="3706775c-519a-4cf5-ad2c-7da1b55903dd" containerName="nova-manage"
Dec 02 10:35:41 crc kubenswrapper[4711]: E1202 10:35:41.693946 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerName="dnsmasq-dns"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.693967 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerName="dnsmasq-dns"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.694121 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="3706775c-519a-4cf5-ad2c-7da1b55903dd" containerName="nova-manage"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.694130 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerName="nova-api-api"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.694145 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="f156af1a-5f12-4bd2-a98f-14a2310e2c78" containerName="dnsmasq-dns"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.694164 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" containerName="nova-api-log"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.695167 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.709989 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.710222 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.710344 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.715176 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.816168 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-public-tls-certs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.816232 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz98s\" (UniqueName: \"kubernetes.io/projected/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-kube-api-access-vz98s\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.816271 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-config-data\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.816541 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.816831 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.817120 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-logs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.919368 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-logs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.919510 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-public-tls-certs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.919617 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz98s\" (UniqueName: \"kubernetes.io/projected/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-kube-api-access-vz98s\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.919690 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-config-data\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.919813 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.919896 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.922617 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-logs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.927546 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-public-tls-certs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.927835 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-config-data\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.932413 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.932440 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:41 crc kubenswrapper[4711]: I1202 10:35:41.940486 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz98s\" (UniqueName: \"kubernetes.io/projected/d6a280ba-4feb-4ffd-8452-a4e7d2c6512b-kube-api-access-vz98s\") pod \"nova-api-0\" (UID: \"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b\") " pod="openstack/nova-api-0"
Dec 02 10:35:42 crc kubenswrapper[4711]: I1202 10:35:42.041678 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 02 10:35:42 crc kubenswrapper[4711]: I1202 10:35:42.471808 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 02 10:35:42 crc kubenswrapper[4711]: I1202 10:35:42.653437 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b","Type":"ContainerStarted","Data":"773cbbfdb6bc6733dca88ff83e959453de6f9d440d491cae4f6d3e44e396c55d"}
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.094712 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83db10c6-d6ee-43f7-a9a0-f1153b44ba67" path="/var/lib/kubelet/pods/83db10c6-d6ee-43f7-a9a0-f1153b44ba67/volumes"
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.281120 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:33416->10.217.0.193:8775: read: connection reset by peer"
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.281237 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:33414->10.217.0.193:8775: read: connection reset by peer"
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.680108 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b","Type":"ContainerStarted","Data":"5783ec4beb916c25fe7d2d5b90795eb715b18d52e350e32904cf4ac57d572bcd"}
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.680465 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6a280ba-4feb-4ffd-8452-a4e7d2c6512b","Type":"ContainerStarted","Data":"81c0fb31d4547eaf8670dff4b1289f7096aa8b16d718af45d829a591ef0c9ab2"}
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.686762 4711 generic.go:334] "Generic (PLEG): container finished" podID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerID="5a4c7d474181d001c638bb6d4be556d96d9723143fac14c610ed7a43facb3316" exitCode=0
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.686806 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df7e62eb-715e-43c5-af99-1a9620eeb8b6","Type":"ContainerDied","Data":"5a4c7d474181d001c638bb6d4be556d96d9723143fac14c610ed7a43facb3316"}
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.715044 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.715019622 podStartE2EDuration="2.715019622s" podCreationTimestamp="2025-12-02 10:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:43.71158364 +0000 UTC m=+1333.420950107" watchObservedRunningTime="2025-12-02 10:35:43.715019622 +0000 UTC m=+1333.424386079"
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.807281 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.856195 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df7e62eb-715e-43c5-af99-1a9620eeb8b6-logs\") pod \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") "
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.856387 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2wqz\" (UniqueName: \"kubernetes.io/projected/df7e62eb-715e-43c5-af99-1a9620eeb8b6-kube-api-access-v2wqz\") pod \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") "
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.856418 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-combined-ca-bundle\") pod \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") "
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.856549 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-nova-metadata-tls-certs\") pod \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") "
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.856644 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-config-data\") pod \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\" (UID: \"df7e62eb-715e-43c5-af99-1a9620eeb8b6\") "
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.858787 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7e62eb-715e-43c5-af99-1a9620eeb8b6-logs" (OuterVolumeSpecName: "logs") pod "df7e62eb-715e-43c5-af99-1a9620eeb8b6" (UID: "df7e62eb-715e-43c5-af99-1a9620eeb8b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.865526 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7e62eb-715e-43c5-af99-1a9620eeb8b6-kube-api-access-v2wqz" (OuterVolumeSpecName: "kube-api-access-v2wqz") pod "df7e62eb-715e-43c5-af99-1a9620eeb8b6" (UID: "df7e62eb-715e-43c5-af99-1a9620eeb8b6"). InnerVolumeSpecName "kube-api-access-v2wqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.899248 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-config-data" (OuterVolumeSpecName: "config-data") pod "df7e62eb-715e-43c5-af99-1a9620eeb8b6" (UID: "df7e62eb-715e-43c5-af99-1a9620eeb8b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.910535 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df7e62eb-715e-43c5-af99-1a9620eeb8b6" (UID: "df7e62eb-715e-43c5-af99-1a9620eeb8b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.913944 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "df7e62eb-715e-43c5-af99-1a9620eeb8b6" (UID: "df7e62eb-715e-43c5-af99-1a9620eeb8b6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.959684 4711 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.959772 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.959789 4711 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df7e62eb-715e-43c5-af99-1a9620eeb8b6-logs\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.959800 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2wqz\" (UniqueName: \"kubernetes.io/projected/df7e62eb-715e-43c5-af99-1a9620eeb8b6-kube-api-access-v2wqz\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:43 crc kubenswrapper[4711]: I1202 10:35:43.959811 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e62eb-715e-43c5-af99-1a9620eeb8b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.699788 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.700041 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df7e62eb-715e-43c5-af99-1a9620eeb8b6","Type":"ContainerDied","Data":"596e1b33810a760510ce92e2acd7373005a6380fc95e1328e1c43e5c0535b82f"}
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.700417 4711 scope.go:117] "RemoveContainer" containerID="5a4c7d474181d001c638bb6d4be556d96d9723143fac14c610ed7a43facb3316"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.734915 4711 scope.go:117] "RemoveContainer" containerID="bc4949427ca0b6989ec324573b574190e876c1172966ea72738dd24c11e6332c"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.752106 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.772573 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.788916 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:35:44 crc kubenswrapper[4711]: E1202 10:35:44.789461 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-log"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.789478 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-log"
Dec 02 10:35:44 crc kubenswrapper[4711]: E1202 10:35:44.789488 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-metadata"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.789496 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-metadata"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.789743 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-metadata"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.789780 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" containerName="nova-metadata-log"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.791314 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.798454 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.798826 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.798876 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.875317 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.875799 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c513475d-590a-4821-9ee5-894e9faaef88-logs\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.875916 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.875991 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-config-data\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.876121 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdl7\" (UniqueName: \"kubernetes.io/projected/c513475d-590a-4821-9ee5-894e9faaef88-kube-api-access-2hdl7\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.977991 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-config-data\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.978311 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdl7\" (UniqueName: \"kubernetes.io/projected/c513475d-590a-4821-9ee5-894e9faaef88-kube-api-access-2hdl7\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.978416 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.978640 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c513475d-590a-4821-9ee5-894e9faaef88-logs\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.978771 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.978990 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c513475d-590a-4821-9ee5-894e9faaef88-logs\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.983155 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-config-data\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.984594 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.986130 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c513475d-590a-4821-9ee5-894e9faaef88-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:44 crc kubenswrapper[4711]: I1202 10:35:44.996516 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdl7\" (UniqueName: \"kubernetes.io/projected/c513475d-590a-4821-9ee5-894e9faaef88-kube-api-access-2hdl7\") pod \"nova-metadata-0\" (UID: \"c513475d-590a-4821-9ee5-894e9faaef88\") " pod="openstack/nova-metadata-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.099330 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7e62eb-715e-43c5-af99-1a9620eeb8b6" path="/var/lib/kubelet/pods/df7e62eb-715e-43c5-af99-1a9620eeb8b6/volumes"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.145035 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.545497 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.590558 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjk2l\" (UniqueName: \"kubernetes.io/projected/d5a70ce3-0321-415c-8b3a-8b7cea271106-kube-api-access-rjk2l\") pod \"d5a70ce3-0321-415c-8b3a-8b7cea271106\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") "
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.590695 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-config-data\") pod \"d5a70ce3-0321-415c-8b3a-8b7cea271106\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") "
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.591045 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-combined-ca-bundle\") pod \"d5a70ce3-0321-415c-8b3a-8b7cea271106\" (UID: \"d5a70ce3-0321-415c-8b3a-8b7cea271106\") "
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.596221 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a70ce3-0321-415c-8b3a-8b7cea271106-kube-api-access-rjk2l" (OuterVolumeSpecName: "kube-api-access-rjk2l") pod "d5a70ce3-0321-415c-8b3a-8b7cea271106" (UID: "d5a70ce3-0321-415c-8b3a-8b7cea271106"). InnerVolumeSpecName "kube-api-access-rjk2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.628326 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-config-data" (OuterVolumeSpecName: "config-data") pod "d5a70ce3-0321-415c-8b3a-8b7cea271106" (UID: "d5a70ce3-0321-415c-8b3a-8b7cea271106"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.635413 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5a70ce3-0321-415c-8b3a-8b7cea271106" (UID: "d5a70ce3-0321-415c-8b3a-8b7cea271106"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.667086 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:35:45 crc kubenswrapper[4711]: W1202 10:35:45.671239 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc513475d_590a_4821_9ee5_894e9faaef88.slice/crio-285081805dcfba78ab36be13797de8acd438e27ca1a7840aa2be9ce709266860 WatchSource:0}: Error finding container 285081805dcfba78ab36be13797de8acd438e27ca1a7840aa2be9ce709266860: Status 404 returned error can't find the container with id 285081805dcfba78ab36be13797de8acd438e27ca1a7840aa2be9ce709266860
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.693118 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjk2l\" (UniqueName: \"kubernetes.io/projected/d5a70ce3-0321-415c-8b3a-8b7cea271106-kube-api-access-rjk2l\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.693149 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.693159 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a70ce3-0321-415c-8b3a-8b7cea271106-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.716066 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c513475d-590a-4821-9ee5-894e9faaef88","Type":"ContainerStarted","Data":"285081805dcfba78ab36be13797de8acd438e27ca1a7840aa2be9ce709266860"}
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.721785 4711 generic.go:334] "Generic (PLEG): container finished" podID="d5a70ce3-0321-415c-8b3a-8b7cea271106" containerID="4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad" exitCode=0
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.721839 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5a70ce3-0321-415c-8b3a-8b7cea271106","Type":"ContainerDied","Data":"4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad"}
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.721872 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5a70ce3-0321-415c-8b3a-8b7cea271106","Type":"ContainerDied","Data":"c791f8a443ea3493fe91f933490380bf910e00fd2afff5caeff0bdbec04546a1"}
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.721870 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.721903 4711 scope.go:117] "RemoveContainer" containerID="4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.759306 4711 scope.go:117] "RemoveContainer" containerID="4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad"
Dec 02 10:35:45 crc kubenswrapper[4711]: E1202 10:35:45.759731 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad\": container with ID starting with 4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad not found: ID does not exist" containerID="4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.759781 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad"} err="failed to get container status \"4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad\": rpc error: code = NotFound desc = could not find container \"4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad\": container with ID starting with 4ab0d6ba50c3186a893abcb6d27c391af2184fa74e028bbe1d1bbe5027e23cad not found: ID does not exist"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.761728 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.787653 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.802201 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 10:35:45 crc kubenswrapper[4711]: E1202 10:35:45.802638 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a70ce3-0321-415c-8b3a-8b7cea271106" containerName="nova-scheduler-scheduler"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.802655 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a70ce3-0321-415c-8b3a-8b7cea271106" containerName="nova-scheduler-scheduler"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.802858 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a70ce3-0321-415c-8b3a-8b7cea271106" containerName="nova-scheduler-scheduler"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.803509 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.806781 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.811626 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.896004 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426aabb1-9d66-4797-8fd2-3ecf4074192e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.896061 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426aabb1-9d66-4797-8fd2-3ecf4074192e-config-data\") pod \"nova-scheduler-0\" (UID: \"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.896118 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfflt\" (UniqueName: \"kubernetes.io/projected/426aabb1-9d66-4797-8fd2-3ecf4074192e-kube-api-access-tfflt\") pod \"nova-scheduler-0\" (UID: \"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.998135 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426aabb1-9d66-4797-8fd2-3ecf4074192e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.998210 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426aabb1-9d66-4797-8fd2-3ecf4074192e-config-data\") pod \"nova-scheduler-0\" (UID: \"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0"
Dec 02 10:35:45 crc kubenswrapper[4711]: I1202 10:35:45.998280 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfflt\" (UniqueName: \"kubernetes.io/projected/426aabb1-9d66-4797-8fd2-3ecf4074192e-kube-api-access-tfflt\") pod \"nova-scheduler-0\" (UID: \"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0"
Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.003733 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426aabb1-9d66-4797-8fd2-3ecf4074192e-config-data\") pod \"nova-scheduler-0\" (UID: \"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0"
Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.003789 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426aabb1-9d66-4797-8fd2-3ecf4074192e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID:
\"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.017641 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfflt\" (UniqueName: \"kubernetes.io/projected/426aabb1-9d66-4797-8fd2-3ecf4074192e-kube-api-access-tfflt\") pod \"nova-scheduler-0\" (UID: \"426aabb1-9d66-4797-8fd2-3ecf4074192e\") " pod="openstack/nova-scheduler-0" Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.133415 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.619874 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.736574 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"426aabb1-9d66-4797-8fd2-3ecf4074192e","Type":"ContainerStarted","Data":"452cee090cdaf3774f2b3ab2760e079445edc181762662b46c90bae89c340b7b"} Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.739553 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c513475d-590a-4821-9ee5-894e9faaef88","Type":"ContainerStarted","Data":"ec461fa65cd7bc6642a85af95311bae59508dee92500b7e3953d4f733907931d"} Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.739581 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c513475d-590a-4821-9ee5-894e9faaef88","Type":"ContainerStarted","Data":"79ad65afaea8564b8b28c985fe380f8321d723e0dea42c216ca960ea77d20345"} Dec 02 10:35:46 crc kubenswrapper[4711]: I1202 10:35:46.770332 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.770269862 podStartE2EDuration="2.770269862s" podCreationTimestamp="2025-12-02 10:35:44 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:46.758643557 +0000 UTC m=+1336.468010034" watchObservedRunningTime="2025-12-02 10:35:46.770269862 +0000 UTC m=+1336.479636339" Dec 02 10:35:47 crc kubenswrapper[4711]: I1202 10:35:47.092318 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a70ce3-0321-415c-8b3a-8b7cea271106" path="/var/lib/kubelet/pods/d5a70ce3-0321-415c-8b3a-8b7cea271106/volumes" Dec 02 10:35:47 crc kubenswrapper[4711]: I1202 10:35:47.757427 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"426aabb1-9d66-4797-8fd2-3ecf4074192e","Type":"ContainerStarted","Data":"d643942ba85d336b95c01544cc45e1b57818f3482b39feb0e26be33f0c423b69"} Dec 02 10:35:47 crc kubenswrapper[4711]: I1202 10:35:47.778831 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.778808349 podStartE2EDuration="2.778808349s" podCreationTimestamp="2025-12-02 10:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:47.776007493 +0000 UTC m=+1337.485373970" watchObservedRunningTime="2025-12-02 10:35:47.778808349 +0000 UTC m=+1337.488174806" Dec 02 10:35:50 crc kubenswrapper[4711]: I1202 10:35:50.145348 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:35:50 crc kubenswrapper[4711]: I1202 10:35:50.145610 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:35:51 crc kubenswrapper[4711]: I1202 10:35:51.133962 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 10:35:52 crc kubenswrapper[4711]: I1202 10:35:52.043252 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Dec 02 10:35:52 crc kubenswrapper[4711]: I1202 10:35:52.043615 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:35:52 crc kubenswrapper[4711]: I1202 10:35:52.586572 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:35:52 crc kubenswrapper[4711]: I1202 10:35:52.586652 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:35:53 crc kubenswrapper[4711]: I1202 10:35:53.059104 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d6a280ba-4feb-4ffd-8452-a4e7d2c6512b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:35:53 crc kubenswrapper[4711]: I1202 10:35:53.059395 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d6a280ba-4feb-4ffd-8452-a4e7d2c6512b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:35:55 crc kubenswrapper[4711]: I1202 10:35:55.145585 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 10:35:55 crc kubenswrapper[4711]: I1202 10:35:55.145964 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Dec 02 10:35:56 crc kubenswrapper[4711]: I1202 10:35:56.133541 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 10:35:56 crc kubenswrapper[4711]: I1202 10:35:56.157212 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c513475d-590a-4821-9ee5-894e9faaef88" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:35:56 crc kubenswrapper[4711]: I1202 10:35:56.157211 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c513475d-590a-4821-9ee5-894e9faaef88" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:35:56 crc kubenswrapper[4711]: I1202 10:35:56.175212 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 10:35:56 crc kubenswrapper[4711]: I1202 10:35:56.896645 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4711]: I1202 10:35:59.908925 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 10:36:02 crc kubenswrapper[4711]: I1202 10:36:02.050703 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 10:36:02 crc kubenswrapper[4711]: I1202 10:36:02.051457 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 10:36:02 crc kubenswrapper[4711]: I1202 10:36:02.054813 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 10:36:02 crc kubenswrapper[4711]: I1202 
10:36:02.061402 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 10:36:02 crc kubenswrapper[4711]: I1202 10:36:02.919432 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 10:36:02 crc kubenswrapper[4711]: I1202 10:36:02.925890 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 10:36:05 crc kubenswrapper[4711]: I1202 10:36:05.152514 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 10:36:05 crc kubenswrapper[4711]: I1202 10:36:05.154130 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 10:36:05 crc kubenswrapper[4711]: I1202 10:36:05.158207 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 10:36:05 crc kubenswrapper[4711]: I1202 10:36:05.959351 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 10:36:14 crc kubenswrapper[4711]: I1202 10:36:14.581575 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:36:15 crc kubenswrapper[4711]: I1202 10:36:15.395140 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:36:18 crc kubenswrapper[4711]: I1202 10:36:18.350111 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" containerName="rabbitmq" containerID="cri-o://245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba" gracePeriod=604797 Dec 02 10:36:19 crc kubenswrapper[4711]: I1202 10:36:19.391910 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" 
podUID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerName="rabbitmq" containerID="cri-o://2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898" gracePeriod=604797 Dec 02 10:36:22 crc kubenswrapper[4711]: I1202 10:36:22.508056 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 02 10:36:22 crc kubenswrapper[4711]: I1202 10:36:22.586321 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:36:22 crc kubenswrapper[4711]: I1202 10:36:22.586441 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:36:22 crc kubenswrapper[4711]: I1202 10:36:22.586503 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:36:22 crc kubenswrapper[4711]: I1202 10:36:22.587160 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa264d5b0f373424df2b67d6e79de2f6c80da037caa0a7a377debbcb2ad5e375"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:36:22 crc kubenswrapper[4711]: I1202 10:36:22.587491 4711 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://aa264d5b0f373424df2b67d6e79de2f6c80da037caa0a7a377debbcb2ad5e375" gracePeriod=600 Dec 02 10:36:22 crc kubenswrapper[4711]: I1202 10:36:22.895496 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Dec 02 10:36:23 crc kubenswrapper[4711]: I1202 10:36:23.117268 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="aa264d5b0f373424df2b67d6e79de2f6c80da037caa0a7a377debbcb2ad5e375" exitCode=0 Dec 02 10:36:23 crc kubenswrapper[4711]: I1202 10:36:23.117354 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"aa264d5b0f373424df2b67d6e79de2f6c80da037caa0a7a377debbcb2ad5e375"} Dec 02 10:36:23 crc kubenswrapper[4711]: I1202 10:36:23.117549 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c"} Dec 02 10:36:23 crc kubenswrapper[4711]: I1202 10:36:23.117593 4711 scope.go:117] "RemoveContainer" containerID="7b9ab21e8bb7413840e645c998ba8a37411c45606ceeecfb5d6d1574a7966068" Dec 02 10:36:24 crc kubenswrapper[4711]: I1202 10:36:24.968563 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.130571 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr6kw\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-kube-api-access-mr6kw\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.130860 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-tls\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.130962 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-server-conf\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.130986 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-erlang-cookie-secret\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131003 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-erlang-cookie\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131074 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-pod-info\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131158 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131206 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-plugins-conf\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131227 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-config-data\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131247 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-plugins\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131274 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-confd\") pod \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\" (UID: \"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29\") " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 
10:36:25.131587 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131728 4711 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.131987 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.132366 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.152810 4711 generic.go:334] "Generic (PLEG): container finished" podID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" containerID="245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba" exitCode=0 Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.152858 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29","Type":"ContainerDied","Data":"245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba"} Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.152886 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29","Type":"ContainerDied","Data":"516840f2de76f0c26d4d8ec8505bd564b9d6063845c110a5ebd53e69e166f07c"} Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.152902 4711 scope.go:117] "RemoveContainer" containerID="245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.153099 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.155179 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.158106 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.161538 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-pod-info" (OuterVolumeSpecName: "pod-info") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.163879 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-kube-api-access-mr6kw" (OuterVolumeSpecName: "kube-api-access-mr6kw") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "kube-api-access-mr6kw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.188695 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.210660 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-config-data" (OuterVolumeSpecName: "config-data") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.221196 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-server-conf" (OuterVolumeSpecName: "server-conf") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.233838 4711 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-pod-info\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.233915 4711 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.233933 4711 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.233944 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc 
kubenswrapper[4711]: I1202 10:36:25.233977 4711 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.233988 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr6kw\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-kube-api-access-mr6kw\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.233997 4711 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.234007 4711 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.234016 4711 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.270251 4711 scope.go:117] "RemoveContainer" containerID="59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.285937 4711 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.297692 4711 scope.go:117] "RemoveContainer" containerID="245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba" Dec 02 10:36:25 crc kubenswrapper[4711]: E1202 10:36:25.298318 4711 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba\": container with ID starting with 245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba not found: ID does not exist" containerID="245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.298433 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba"} err="failed to get container status \"245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba\": rpc error: code = NotFound desc = could not find container \"245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba\": container with ID starting with 245fc5489464bd1f183572da51d5370b2eacb7e904a4a07dad7b5c6a9ce5b7ba not found: ID does not exist" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.298514 4711 scope.go:117] "RemoveContainer" containerID="59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4" Dec 02 10:36:25 crc kubenswrapper[4711]: E1202 10:36:25.301418 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4\": container with ID starting with 59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4 not found: ID does not exist" containerID="59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.301461 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4"} err="failed to get container status \"59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4\": rpc error: code = NotFound 
desc = could not find container \"59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4\": container with ID starting with 59b11b593094b689e9ee147410d78c22e9f6fc07d27944727720d80cb5f1e8c4 not found: ID does not exist" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.303874 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" (UID: "3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.336052 4711 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.336082 4711 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.489195 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.507543 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.531843 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:36:25 crc kubenswrapper[4711]: E1202 10:36:25.532363 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" containerName="rabbitmq" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.532404 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" 
containerName="rabbitmq" Dec 02 10:36:25 crc kubenswrapper[4711]: E1202 10:36:25.532425 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" containerName="setup-container" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.532434 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" containerName="setup-container" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.532694 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" containerName="rabbitmq" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.534433 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.538376 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.538438 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-msfjh" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.538486 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.538492 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.538547 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.543165 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.546979 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 02 10:36:25 crc 
kubenswrapper[4711]: I1202 10:36:25.565599 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644222 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644287 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644327 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644372 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644551 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644575 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644590 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpbv\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-kube-api-access-fzpbv\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644606 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644626 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.644916 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 
10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.645074 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747269 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747344 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747406 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747480 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747599 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747646 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747675 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzpbv\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-kube-api-access-fzpbv\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747705 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747746 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747798 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: 
\"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.747857 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.748317 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.748406 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.748635 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.751815 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.752738 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.752782 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.753301 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.754335 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.759908 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.762647 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.772079 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzpbv\" (UniqueName: \"kubernetes.io/projected/9b7fae8d-6b42-4c76-b0a0-74004c2e5e47-kube-api-access-fzpbv\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.788437 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47\") " pod="openstack/rabbitmq-server-0" Dec 02 10:36:25 crc kubenswrapper[4711]: I1202 10:36:25.860774 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.021820 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.162582 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.162843 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdbcea35-5752-4be6-a7db-0f3aa362be58-erlang-cookie-secret\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.162907 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h4gj\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-kube-api-access-2h4gj\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.163684 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-config-data\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.163838 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-plugins-conf\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.163867 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-server-conf\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.164023 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-erlang-cookie\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.164120 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-plugins\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.164198 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdbcea35-5752-4be6-a7db-0f3aa362be58-pod-info\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.164270 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-confd\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.164381 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-tls\") pod \"cdbcea35-5752-4be6-a7db-0f3aa362be58\" (UID: \"cdbcea35-5752-4be6-a7db-0f3aa362be58\") " Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 
10:36:26.164891 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.165423 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.165487 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.166092 4711 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.166113 4711 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.166124 4711 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.168834 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbcea35-5752-4be6-a7db-0f3aa362be58-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.169671 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-kube-api-access-2h4gj" (OuterVolumeSpecName: "kube-api-access-2h4gj") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "kube-api-access-2h4gj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.175276 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.179258 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cdbcea35-5752-4be6-a7db-0f3aa362be58-pod-info" (OuterVolumeSpecName: "pod-info") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.183725 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.195218 4711 generic.go:334] "Generic (PLEG): container finished" podID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerID="2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898" exitCode=0 Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.195358 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.195972 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdbcea35-5752-4be6-a7db-0f3aa362be58","Type":"ContainerDied","Data":"2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898"} Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.196069 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdbcea35-5752-4be6-a7db-0f3aa362be58","Type":"ContainerDied","Data":"4dc5b5d5cc462c13394e27631c065f2e1a47448fb395e66251601a9e242e91d0"} Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.196132 4711 scope.go:117] "RemoveContainer" containerID="2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.199513 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-config-data" (OuterVolumeSpecName: "config-data") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.220025 4711 scope.go:117] "RemoveContainer" containerID="8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.224465 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-server-conf" (OuterVolumeSpecName: "server-conf") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.243023 4711 scope.go:117] "RemoveContainer" containerID="2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898" Dec 02 10:36:26 crc kubenswrapper[4711]: E1202 10:36:26.243440 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898\": container with ID starting with 2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898 not found: ID does not exist" containerID="2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.243470 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898"} err="failed to get container status \"2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898\": rpc error: code = NotFound desc = could not find container \"2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898\": container with ID starting with 2134a180aea42b02f04c00e78f4641461c10fe012e21b05a28960bce7a55f898 not found: ID does not exist" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.243495 4711 scope.go:117] "RemoveContainer" containerID="8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7" Dec 02 10:36:26 crc kubenswrapper[4711]: E1202 10:36:26.243876 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7\": container with ID starting with 8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7 not found: ID does not exist" containerID="8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7" Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.243898 
4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7"} err="failed to get container status \"8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7\": rpc error: code = NotFound desc = could not find container \"8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7\": container with ID starting with 8cff31e19396fc956bdae49bc1df5c315bf83e71499fff0a4ffd8e4bd7158fb7 not found: ID does not exist"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.268408 4711 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdbcea35-5752-4be6-a7db-0f3aa362be58-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.268444 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h4gj\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-kube-api-access-2h4gj\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.268460 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.268473 4711 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdbcea35-5752-4be6-a7db-0f3aa362be58-server-conf\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.268484 4711 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdbcea35-5752-4be6-a7db-0f3aa362be58-pod-info\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.268495 4711 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.268534 4711 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.291971 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cdbcea35-5752-4be6-a7db-0f3aa362be58" (UID: "cdbcea35-5752-4be6-a7db-0f3aa362be58"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.295906 4711 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.369886 4711 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.369919 4711 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdbcea35-5752-4be6-a7db-0f3aa362be58-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.424551 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 10:36:26 crc kubenswrapper[4711]: W1202 10:36:26.428389 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7fae8d_6b42_4c76_b0a0_74004c2e5e47.slice/crio-9af710b69aee135687f7b49f228da8305d56ab1168b4f96587f60338768fe845 WatchSource:0}: Error finding container 9af710b69aee135687f7b49f228da8305d56ab1168b4f96587f60338768fe845: Status 404 returned error can't find the container with id 9af710b69aee135687f7b49f228da8305d56ab1168b4f96587f60338768fe845
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.533702 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.549589 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.573072 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 10:36:26 crc kubenswrapper[4711]: E1202 10:36:26.573508 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerName="setup-container"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.573529 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerName="setup-container"
Dec 02 10:36:26 crc kubenswrapper[4711]: E1202 10:36:26.573550 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerName="rabbitmq"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.573559 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerName="rabbitmq"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.573805 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbcea35-5752-4be6-a7db-0f3aa362be58" containerName="rabbitmq"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.576660 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.580653 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.580692 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.580709 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.580692 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.581153 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.581435 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.583169 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4sctb"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.589406 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675453 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675526 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b642dce9-6793-46ab-9d8a-061c21e965ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675565 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675587 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675605 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675621 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675654 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675669 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pwpf\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-kube-api-access-2pwpf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675682 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675695 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.675716 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b642dce9-6793-46ab-9d8a-061c21e965ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777025 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b642dce9-6793-46ab-9d8a-061c21e965ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777102 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777130 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777162 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777187 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777258 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777281 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777299 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777319 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pwpf\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-kube-api-access-2pwpf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777349 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b642dce9-6793-46ab-9d8a-061c21e965ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777437 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.777723 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.780227 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.780486 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.780635 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.782545 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b642dce9-6793-46ab-9d8a-061c21e965ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.783698 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b642dce9-6793-46ab-9d8a-061c21e965ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.784233 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b642dce9-6793-46ab-9d8a-061c21e965ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.785044 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.785559 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.786564 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.799779 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pwpf\" (UniqueName: \"kubernetes.io/projected/b642dce9-6793-46ab-9d8a-061c21e965ce-kube-api-access-2pwpf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.813373 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b642dce9-6793-46ab-9d8a-061c21e965ce\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:26 crc kubenswrapper[4711]: I1202 10:36:26.986429 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:36:27 crc kubenswrapper[4711]: I1202 10:36:27.089973 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29" path="/var/lib/kubelet/pods/3c6d3f5e-cea2-4bbf-a955-6bcbfbfa3a29/volumes"
Dec 02 10:36:27 crc kubenswrapper[4711]: I1202 10:36:27.091203 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbcea35-5752-4be6-a7db-0f3aa362be58" path="/var/lib/kubelet/pods/cdbcea35-5752-4be6-a7db-0f3aa362be58/volumes"
Dec 02 10:36:27 crc kubenswrapper[4711]: I1202 10:36:27.210726 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47","Type":"ContainerStarted","Data":"9af710b69aee135687f7b49f228da8305d56ab1168b4f96587f60338768fe845"}
Dec 02 10:36:27 crc kubenswrapper[4711]: I1202 10:36:27.487309 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 10:36:27 crc kubenswrapper[4711]: W1202 10:36:27.498681 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb642dce9_6793_46ab_9d8a_061c21e965ce.slice/crio-d4f65e61b5503bf73ecb8860330c03ba5a19e94f8e80d145538f3ff603a332ef WatchSource:0}: Error finding container d4f65e61b5503bf73ecb8860330c03ba5a19e94f8e80d145538f3ff603a332ef: Status 404 returned error can't find the container with id d4f65e61b5503bf73ecb8860330c03ba5a19e94f8e80d145538f3ff603a332ef
Dec 02 10:36:28 crc kubenswrapper[4711]: I1202 10:36:28.219368 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47","Type":"ContainerStarted","Data":"b721ebfb74df5fb24fb93e40e5422b04d46afa06c7e25e863972cd95c0653fd4"}
Dec 02 10:36:28 crc kubenswrapper[4711]: I1202 10:36:28.220439 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b642dce9-6793-46ab-9d8a-061c21e965ce","Type":"ContainerStarted","Data":"d4f65e61b5503bf73ecb8860330c03ba5a19e94f8e80d145538f3ff603a332ef"}
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.237991 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b642dce9-6793-46ab-9d8a-061c21e965ce","Type":"ContainerStarted","Data":"d3d5819a0b8fffd457478e0f77071e53fec85d6907a855c4c0bc92dc3a7dda30"}
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.610086 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-d2tmf"]
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.611737 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.614822 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.623193 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-d2tmf"]
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.673075 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.673209 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.674019 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.674149 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmcp\" (UniqueName: \"kubernetes.io/projected/1d3b50ef-f07a-43cb-8bad-1375aec48f67-kube-api-access-mnmcp\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.674274 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-config\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.674445 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-svc\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.674497 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.776137 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.776211 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnmcp\" (UniqueName: \"kubernetes.io/projected/1d3b50ef-f07a-43cb-8bad-1375aec48f67-kube-api-access-mnmcp\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.776268 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-config\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.776342 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-svc\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.776374 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.776442 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.776475 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.777559 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.777864 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-config\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.777984 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.778262 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-svc\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.778407 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.778510 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.795468 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnmcp\" (UniqueName: \"kubernetes.io/projected/1d3b50ef-f07a-43cb-8bad-1375aec48f67-kube-api-access-mnmcp\") pod \"dnsmasq-dns-d558885bc-d2tmf\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:30 crc kubenswrapper[4711]: I1202 10:36:30.937129 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:31 crc kubenswrapper[4711]: I1202 10:36:31.409683 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-d2tmf"]
Dec 02 10:36:32 crc kubenswrapper[4711]: I1202 10:36:32.264458 4711 generic.go:334] "Generic (PLEG): container finished" podID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" containerID="df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5" exitCode=0
Dec 02 10:36:32 crc kubenswrapper[4711]: I1202 10:36:32.264580 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" event={"ID":"1d3b50ef-f07a-43cb-8bad-1375aec48f67","Type":"ContainerDied","Data":"df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5"}
Dec 02 10:36:32 crc kubenswrapper[4711]: I1202 10:36:32.264791 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" event={"ID":"1d3b50ef-f07a-43cb-8bad-1375aec48f67","Type":"ContainerStarted","Data":"ad77e3b2e5ac406d84ec9a332506b3097ba74b6cfcaa47a89822a24a55621e61"}
Dec 02 10:36:33 crc kubenswrapper[4711]: I1202 10:36:33.277181 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" event={"ID":"1d3b50ef-f07a-43cb-8bad-1375aec48f67","Type":"ContainerStarted","Data":"9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d"}
Dec 02 10:36:33 crc kubenswrapper[4711]: I1202 10:36:33.277674 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:33 crc kubenswrapper[4711]: I1202 10:36:33.304780 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" podStartSLOduration=3.304734997 podStartE2EDuration="3.304734997s" podCreationTimestamp="2025-12-02 10:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:33.297108509 +0000 UTC m=+1383.006474956" watchObservedRunningTime="2025-12-02 10:36:33.304734997 +0000 UTC m=+1383.014101444"
Dec 02 10:36:40 crc kubenswrapper[4711]: I1202 10:36:40.938887 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-d2tmf"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.037840 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mwvf8"]
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.038112 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" podUID="a450e16e-93e0-4525-8514-f101cc87ae8b" containerName="dnsmasq-dns" containerID="cri-o://e3b3c062d14e84cb43d5d443ab2c46bb79aa537c709cea928c4a78d1cd0beb19" gracePeriod=10
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.183946 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-59gqw"]
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.187366 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.195259 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-59gqw"]
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.363646 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.363691 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.363757 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-config\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.363803 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j64kr\" (UniqueName: \"kubernetes.io/projected/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-kube-api-access-j64kr\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.363829 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.363847 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.363864 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.382034 4711 generic.go:334] "Generic (PLEG): container finished" podID="a450e16e-93e0-4525-8514-f101cc87ae8b" containerID="e3b3c062d14e84cb43d5d443ab2c46bb79aa537c709cea928c4a78d1cd0beb19" exitCode=0
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.382080 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" event={"ID":"a450e16e-93e0-4525-8514-f101cc87ae8b","Type":"ContainerDied","Data":"e3b3c062d14e84cb43d5d443ab2c46bb79aa537c709cea928c4a78d1cd0beb19"}
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.466204 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.466610 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.466680 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-config\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.466729 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j64kr\" (UniqueName: \"kubernetes.io/projected/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-kube-api-access-j64kr\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.466758 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw"
Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.466782 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") "
pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.466798 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.467575 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.467722 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.468253 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.468314 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc 
kubenswrapper[4711]: I1202 10:36:41.468348 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-config\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.469449 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.500434 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j64kr\" (UniqueName: \"kubernetes.io/projected/e33eabd6-6a5d-4d49-b0db-3d31fcb6f171-kube-api-access-j64kr\") pod \"dnsmasq-dns-78c64bc9c5-59gqw\" (UID: \"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171\") " pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.537405 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.699776 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.901067 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-swift-storage-0\") pod \"a450e16e-93e0-4525-8514-f101cc87ae8b\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.901221 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-nb\") pod \"a450e16e-93e0-4525-8514-f101cc87ae8b\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.901301 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-config\") pod \"a450e16e-93e0-4525-8514-f101cc87ae8b\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.901408 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zrff\" (UniqueName: \"kubernetes.io/projected/a450e16e-93e0-4525-8514-f101cc87ae8b-kube-api-access-5zrff\") pod \"a450e16e-93e0-4525-8514-f101cc87ae8b\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.901565 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-svc\") pod \"a450e16e-93e0-4525-8514-f101cc87ae8b\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.901725 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-sb\") pod \"a450e16e-93e0-4525-8514-f101cc87ae8b\" (UID: \"a450e16e-93e0-4525-8514-f101cc87ae8b\") " Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.905798 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-59gqw"] Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.908293 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a450e16e-93e0-4525-8514-f101cc87ae8b-kube-api-access-5zrff" (OuterVolumeSpecName: "kube-api-access-5zrff") pod "a450e16e-93e0-4525-8514-f101cc87ae8b" (UID: "a450e16e-93e0-4525-8514-f101cc87ae8b"). InnerVolumeSpecName "kube-api-access-5zrff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.988945 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a450e16e-93e0-4525-8514-f101cc87ae8b" (UID: "a450e16e-93e0-4525-8514-f101cc87ae8b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:41 crc kubenswrapper[4711]: I1202 10:36:41.998480 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a450e16e-93e0-4525-8514-f101cc87ae8b" (UID: "a450e16e-93e0-4525-8514-f101cc87ae8b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.003231 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-config" (OuterVolumeSpecName: "config") pod "a450e16e-93e0-4525-8514-f101cc87ae8b" (UID: "a450e16e-93e0-4525-8514-f101cc87ae8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.004887 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.004915 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.004924 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zrff\" (UniqueName: \"kubernetes.io/projected/a450e16e-93e0-4525-8514-f101cc87ae8b-kube-api-access-5zrff\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.004934 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.007885 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a450e16e-93e0-4525-8514-f101cc87ae8b" (UID: "a450e16e-93e0-4525-8514-f101cc87ae8b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.014468 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a450e16e-93e0-4525-8514-f101cc87ae8b" (UID: "a450e16e-93e0-4525-8514-f101cc87ae8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.107155 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.107409 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a450e16e-93e0-4525-8514-f101cc87ae8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.391439 4711 generic.go:334] "Generic (PLEG): container finished" podID="e33eabd6-6a5d-4d49-b0db-3d31fcb6f171" containerID="12f1c7d687ba9717c7fc5e74fd07941a8b5144bd2c8d2883ee9eb62c70adacfe" exitCode=0 Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.391522 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" event={"ID":"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171","Type":"ContainerDied","Data":"12f1c7d687ba9717c7fc5e74fd07941a8b5144bd2c8d2883ee9eb62c70adacfe"} Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.391595 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" event={"ID":"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171","Type":"ContainerStarted","Data":"5ad8175de0f2d4adf032d1f6a5326503932168f9603376480b5c0d3579fb69f9"} Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.393849 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" event={"ID":"a450e16e-93e0-4525-8514-f101cc87ae8b","Type":"ContainerDied","Data":"b9f4627485a6df0e8b58bdd1292a8edbe32d5aaa5ebbe38b68e8ce59feb5b898"} Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.393890 4711 scope.go:117] "RemoveContainer" containerID="e3b3c062d14e84cb43d5d443ab2c46bb79aa537c709cea928c4a78d1cd0beb19" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.393913 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-mwvf8" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.616612 4711 scope.go:117] "RemoveContainer" containerID="057ca564a653ae77cbd215e8fd11c68a706d7c71b950ef9a4e6362c8787da0ed" Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.648692 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mwvf8"] Dec 02 10:36:42 crc kubenswrapper[4711]: I1202 10:36:42.656411 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mwvf8"] Dec 02 10:36:43 crc kubenswrapper[4711]: I1202 10:36:43.091102 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a450e16e-93e0-4525-8514-f101cc87ae8b" path="/var/lib/kubelet/pods/a450e16e-93e0-4525-8514-f101cc87ae8b/volumes" Dec 02 10:36:43 crc kubenswrapper[4711]: I1202 10:36:43.413231 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" event={"ID":"e33eabd6-6a5d-4d49-b0db-3d31fcb6f171","Type":"ContainerStarted","Data":"c5b3286efc1a0409b07f8b2a146d72b0ca9faa135f6b42d25d7725fb9832693c"} Dec 02 10:36:43 crc kubenswrapper[4711]: I1202 10:36:43.413475 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:43 crc kubenswrapper[4711]: I1202 10:36:43.440701 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" podStartSLOduration=2.440681896 podStartE2EDuration="2.440681896s" podCreationTimestamp="2025-12-02 10:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:43.434009375 +0000 UTC m=+1393.143375842" watchObservedRunningTime="2025-12-02 10:36:43.440681896 +0000 UTC m=+1393.150048343" Dec 02 10:36:51 crc kubenswrapper[4711]: I1202 10:36:51.540278 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-59gqw" Dec 02 10:36:51 crc kubenswrapper[4711]: I1202 10:36:51.665041 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-d2tmf"] Dec 02 10:36:51 crc kubenswrapper[4711]: I1202 10:36:51.665285 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" podUID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" containerName="dnsmasq-dns" containerID="cri-o://9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d" gracePeriod=10 Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.154222 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.303946 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-sb\") pod \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.304085 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-swift-storage-0\") pod \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.304172 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-config\") pod \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.304195 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-nb\") pod \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.304260 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-openstack-edpm-ipam\") pod \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.304334 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-svc\") pod \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.304402 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnmcp\" (UniqueName: \"kubernetes.io/projected/1d3b50ef-f07a-43cb-8bad-1375aec48f67-kube-api-access-mnmcp\") pod \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\" (UID: \"1d3b50ef-f07a-43cb-8bad-1375aec48f67\") " Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.322253 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3b50ef-f07a-43cb-8bad-1375aec48f67-kube-api-access-mnmcp" (OuterVolumeSpecName: "kube-api-access-mnmcp") pod "1d3b50ef-f07a-43cb-8bad-1375aec48f67" (UID: "1d3b50ef-f07a-43cb-8bad-1375aec48f67"). InnerVolumeSpecName "kube-api-access-mnmcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.356762 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d3b50ef-f07a-43cb-8bad-1375aec48f67" (UID: "1d3b50ef-f07a-43cb-8bad-1375aec48f67"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.364523 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-config" (OuterVolumeSpecName: "config") pod "1d3b50ef-f07a-43cb-8bad-1375aec48f67" (UID: "1d3b50ef-f07a-43cb-8bad-1375aec48f67"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.366230 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "1d3b50ef-f07a-43cb-8bad-1375aec48f67" (UID: "1d3b50ef-f07a-43cb-8bad-1375aec48f67"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.367122 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d3b50ef-f07a-43cb-8bad-1375aec48f67" (UID: "1d3b50ef-f07a-43cb-8bad-1375aec48f67"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.377698 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d3b50ef-f07a-43cb-8bad-1375aec48f67" (UID: "1d3b50ef-f07a-43cb-8bad-1375aec48f67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.382205 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d3b50ef-f07a-43cb-8bad-1375aec48f67" (UID: "1d3b50ef-f07a-43cb-8bad-1375aec48f67"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.406692 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.406737 4711 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.406753 4711 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.406766 4711 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.406778 4711 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.406790 4711 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3b50ef-f07a-43cb-8bad-1375aec48f67-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.406802 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnmcp\" (UniqueName: \"kubernetes.io/projected/1d3b50ef-f07a-43cb-8bad-1375aec48f67-kube-api-access-mnmcp\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.515776 
4711 generic.go:334] "Generic (PLEG): container finished" podID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" containerID="9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d" exitCode=0 Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.515822 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" event={"ID":"1d3b50ef-f07a-43cb-8bad-1375aec48f67","Type":"ContainerDied","Data":"9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d"} Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.515840 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.515859 4711 scope.go:117] "RemoveContainer" containerID="9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.515850 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-d2tmf" event={"ID":"1d3b50ef-f07a-43cb-8bad-1375aec48f67","Type":"ContainerDied","Data":"ad77e3b2e5ac406d84ec9a332506b3097ba74b6cfcaa47a89822a24a55621e61"} Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.548144 4711 scope.go:117] "RemoveContainer" containerID="df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.552824 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-d2tmf"] Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.565671 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-d2tmf"] Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.578025 4711 scope.go:117] "RemoveContainer" containerID="9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d" Dec 02 10:36:52 crc kubenswrapper[4711]: E1202 10:36:52.578610 4711 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d\": container with ID starting with 9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d not found: ID does not exist" containerID="9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.578685 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d"} err="failed to get container status \"9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d\": rpc error: code = NotFound desc = could not find container \"9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d\": container with ID starting with 9c98d54776a9790a9bcc9859daf82a1950952dfb9d4737f8cbc6791cfb67b50d not found: ID does not exist" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.578714 4711 scope.go:117] "RemoveContainer" containerID="df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5" Dec 02 10:36:52 crc kubenswrapper[4711]: E1202 10:36:52.579424 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5\": container with ID starting with df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5 not found: ID does not exist" containerID="df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5" Dec 02 10:36:52 crc kubenswrapper[4711]: I1202 10:36:52.579456 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5"} err="failed to get container status \"df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5\": rpc error: code = NotFound desc = could not find container 
\"df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5\": container with ID starting with df3b911ea3ec4dba80dd8fed30444c496c1fca27b91b5dfd321bbae0244902a5 not found: ID does not exist"
Dec 02 10:36:53 crc kubenswrapper[4711]: I1202 10:36:53.089487 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" path="/var/lib/kubelet/pods/1d3b50ef-f07a-43cb-8bad-1375aec48f67/volumes"
Dec 02 10:37:00 crc kubenswrapper[4711]: I1202 10:37:00.598705 4711 generic.go:334] "Generic (PLEG): container finished" podID="9b7fae8d-6b42-4c76-b0a0-74004c2e5e47" containerID="b721ebfb74df5fb24fb93e40e5422b04d46afa06c7e25e863972cd95c0653fd4" exitCode=0
Dec 02 10:37:00 crc kubenswrapper[4711]: I1202 10:37:00.598802 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47","Type":"ContainerDied","Data":"b721ebfb74df5fb24fb93e40e5422b04d46afa06c7e25e863972cd95c0653fd4"}
Dec 02 10:37:01 crc kubenswrapper[4711]: I1202 10:37:01.612878 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b7fae8d-6b42-4c76-b0a0-74004c2e5e47","Type":"ContainerStarted","Data":"85fd90f4b472610f1a0f5832ee44ab17ddc5565d96772708bd503dd584e2d722"}
Dec 02 10:37:01 crc kubenswrapper[4711]: I1202 10:37:01.613488 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 02 10:37:01 crc kubenswrapper[4711]: I1202 10:37:01.661064 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.661034787 podStartE2EDuration="36.661034787s" podCreationTimestamp="2025-12-02 10:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:01.647596801 +0000 UTC m=+1411.356963268" watchObservedRunningTime="2025-12-02 10:37:01.661034787 +0000 UTC m=+1411.370401254"
Dec 02 10:37:02 crc kubenswrapper[4711]: I1202 10:37:02.622078 4711 generic.go:334] "Generic (PLEG): container finished" podID="b642dce9-6793-46ab-9d8a-061c21e965ce" containerID="d3d5819a0b8fffd457478e0f77071e53fec85d6907a855c4c0bc92dc3a7dda30" exitCode=0
Dec 02 10:37:02 crc kubenswrapper[4711]: I1202 10:37:02.622151 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b642dce9-6793-46ab-9d8a-061c21e965ce","Type":"ContainerDied","Data":"d3d5819a0b8fffd457478e0f77071e53fec85d6907a855c4c0bc92dc3a7dda30"}
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.639598 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b642dce9-6793-46ab-9d8a-061c21e965ce","Type":"ContainerStarted","Data":"e868f293e5c21c30de379ff210130cab342d06bf949b81d146f95c46d63fa39f"}
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.640346 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.689649 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.689607451 podStartE2EDuration="38.689607451s" podCreationTimestamp="2025-12-02 10:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:04.678618513 +0000 UTC m=+1414.387984960" watchObservedRunningTime="2025-12-02 10:37:04.689607451 +0000 UTC m=+1414.398973898"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.991131 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"]
Dec 02 10:37:04 crc kubenswrapper[4711]: E1202 10:37:04.991640 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" containerName="init"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.991659 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" containerName="init"
Dec 02 10:37:04 crc kubenswrapper[4711]: E1202 10:37:04.991675 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a450e16e-93e0-4525-8514-f101cc87ae8b" containerName="init"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.991681 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="a450e16e-93e0-4525-8514-f101cc87ae8b" containerName="init"
Dec 02 10:37:04 crc kubenswrapper[4711]: E1202 10:37:04.991697 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a450e16e-93e0-4525-8514-f101cc87ae8b" containerName="dnsmasq-dns"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.991703 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="a450e16e-93e0-4525-8514-f101cc87ae8b" containerName="dnsmasq-dns"
Dec 02 10:37:04 crc kubenswrapper[4711]: E1202 10:37:04.991713 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" containerName="dnsmasq-dns"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.991719 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" containerName="dnsmasq-dns"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.991910 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3b50ef-f07a-43cb-8bad-1375aec48f67" containerName="dnsmasq-dns"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.991938 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="a450e16e-93e0-4525-8514-f101cc87ae8b" containerName="dnsmasq-dns"
Dec 02 10:37:04 crc kubenswrapper[4711]: I1202 10:37:04.992680 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.000054 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.000301 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.000447 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.000585 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.007880 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"]
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.171571 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn4bk\" (UniqueName: \"kubernetes.io/projected/2712309c-6014-4332-86b8-d42b5021b6c0-kube-api-access-dn4bk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.171679 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.171745 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.171789 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.273279 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn4bk\" (UniqueName: \"kubernetes.io/projected/2712309c-6014-4332-86b8-d42b5021b6c0-kube-api-access-dn4bk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.273478 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.273556 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.273618 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.280578 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.280980 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.281397 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.295573 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn4bk\" (UniqueName: \"kubernetes.io/projected/2712309c-6014-4332-86b8-d42b5021b6c0-kube-api-access-dn4bk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.328889 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:05 crc kubenswrapper[4711]: I1202 10:37:05.934457 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"]
Dec 02 10:37:05 crc kubenswrapper[4711]: W1202 10:37:05.936753 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2712309c_6014_4332_86b8_d42b5021b6c0.slice/crio-7b9287ff4f83b68bff0fe8745a8bfa06760fa0dc14e7b0ab2e7fc69d9a5e48d0 WatchSource:0}: Error finding container 7b9287ff4f83b68bff0fe8745a8bfa06760fa0dc14e7b0ab2e7fc69d9a5e48d0: Status 404 returned error can't find the container with id 7b9287ff4f83b68bff0fe8745a8bfa06760fa0dc14e7b0ab2e7fc69d9a5e48d0
Dec 02 10:37:06 crc kubenswrapper[4711]: I1202 10:37:06.662306 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj" event={"ID":"2712309c-6014-4332-86b8-d42b5021b6c0","Type":"ContainerStarted","Data":"7b9287ff4f83b68bff0fe8745a8bfa06760fa0dc14e7b0ab2e7fc69d9a5e48d0"}
Dec 02 10:37:15 crc kubenswrapper[4711]: I1202 10:37:15.771581 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj" event={"ID":"2712309c-6014-4332-86b8-d42b5021b6c0","Type":"ContainerStarted","Data":"d1464082fa61ebe6036f21de0070fcd2f8d6d7e4135dca85711cc361e2246bbe"}
Dec 02 10:37:15 crc kubenswrapper[4711]: I1202 10:37:15.791445 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj" podStartSLOduration=2.953383747 podStartE2EDuration="11.791427401s" podCreationTimestamp="2025-12-02 10:37:04 +0000 UTC" firstStartedPulling="2025-12-02 10:37:05.939148601 +0000 UTC m=+1415.648515048" lastFinishedPulling="2025-12-02 10:37:14.777192215 +0000 UTC m=+1424.486558702" observedRunningTime="2025-12-02 10:37:15.790889807 +0000 UTC m=+1425.500256294" watchObservedRunningTime="2025-12-02 10:37:15.791427401 +0000 UTC m=+1425.500793878"
Dec 02 10:37:15 crc kubenswrapper[4711]: I1202 10:37:15.864182 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 02 10:37:16 crc kubenswrapper[4711]: I1202 10:37:16.992198 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 02 10:37:26 crc kubenswrapper[4711]: I1202 10:37:26.913134 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj" event={"ID":"2712309c-6014-4332-86b8-d42b5021b6c0","Type":"ContainerDied","Data":"d1464082fa61ebe6036f21de0070fcd2f8d6d7e4135dca85711cc361e2246bbe"}
Dec 02 10:37:26 crc kubenswrapper[4711]: I1202 10:37:26.913085 4711 generic.go:334] "Generic (PLEG): container finished" podID="2712309c-6014-4332-86b8-d42b5021b6c0" containerID="d1464082fa61ebe6036f21de0070fcd2f8d6d7e4135dca85711cc361e2246bbe" exitCode=0
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.460058 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.603031 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn4bk\" (UniqueName: \"kubernetes.io/projected/2712309c-6014-4332-86b8-d42b5021b6c0-kube-api-access-dn4bk\") pod \"2712309c-6014-4332-86b8-d42b5021b6c0\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") "
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.603128 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-repo-setup-combined-ca-bundle\") pod \"2712309c-6014-4332-86b8-d42b5021b6c0\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") "
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.603193 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-inventory\") pod \"2712309c-6014-4332-86b8-d42b5021b6c0\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") "
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.603320 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-ssh-key\") pod \"2712309c-6014-4332-86b8-d42b5021b6c0\" (UID: \"2712309c-6014-4332-86b8-d42b5021b6c0\") "
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.610642 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2712309c-6014-4332-86b8-d42b5021b6c0-kube-api-access-dn4bk" (OuterVolumeSpecName: "kube-api-access-dn4bk") pod "2712309c-6014-4332-86b8-d42b5021b6c0" (UID: "2712309c-6014-4332-86b8-d42b5021b6c0"). InnerVolumeSpecName "kube-api-access-dn4bk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.610935 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2712309c-6014-4332-86b8-d42b5021b6c0" (UID: "2712309c-6014-4332-86b8-d42b5021b6c0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.654118 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2712309c-6014-4332-86b8-d42b5021b6c0" (UID: "2712309c-6014-4332-86b8-d42b5021b6c0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.670503 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-inventory" (OuterVolumeSpecName: "inventory") pod "2712309c-6014-4332-86b8-d42b5021b6c0" (UID: "2712309c-6014-4332-86b8-d42b5021b6c0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.706005 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.706049 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.706063 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn4bk\" (UniqueName: \"kubernetes.io/projected/2712309c-6014-4332-86b8-d42b5021b6c0-kube-api-access-dn4bk\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.706079 4711 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2712309c-6014-4332-86b8-d42b5021b6c0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.941343 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj" event={"ID":"2712309c-6014-4332-86b8-d42b5021b6c0","Type":"ContainerDied","Data":"7b9287ff4f83b68bff0fe8745a8bfa06760fa0dc14e7b0ab2e7fc69d9a5e48d0"}
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.941419 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9287ff4f83b68bff0fe8745a8bfa06760fa0dc14e7b0ab2e7fc69d9a5e48d0"
Dec 02 10:37:28 crc kubenswrapper[4711]: I1202 10:37:28.941420 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.038763 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"]
Dec 02 10:37:29 crc kubenswrapper[4711]: E1202 10:37:29.039297 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2712309c-6014-4332-86b8-d42b5021b6c0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.039325 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="2712309c-6014-4332-86b8-d42b5021b6c0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.039536 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="2712309c-6014-4332-86b8-d42b5021b6c0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.040286 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.042741 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.043264 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.043641 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.043825 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.051518 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"]
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.113412 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.113498 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64x84\" (UniqueName: \"kubernetes.io/projected/c9824b88-0553-466a-9c0d-07ab1949543a-kube-api-access-64x84\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.113541 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.215498 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.215644 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64x84\" (UniqueName: \"kubernetes.io/projected/c9824b88-0553-466a-9c0d-07ab1949543a-kube-api-access-64x84\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.215721 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.219271 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.219394 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.234665 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64x84\" (UniqueName: \"kubernetes.io/projected/c9824b88-0553-466a-9c0d-07ab1949543a-kube-api-access-64x84\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6hwm\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.358928 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:29 crc kubenswrapper[4711]: W1202 10:37:29.962677 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9824b88_0553_466a_9c0d_07ab1949543a.slice/crio-4cf60297e61a5ea67066dbfdf1661a09ff851233e6fd3e950a4f8957b784bbd0 WatchSource:0}: Error finding container 4cf60297e61a5ea67066dbfdf1661a09ff851233e6fd3e950a4f8957b784bbd0: Status 404 returned error can't find the container with id 4cf60297e61a5ea67066dbfdf1661a09ff851233e6fd3e950a4f8957b784bbd0
Dec 02 10:37:29 crc kubenswrapper[4711]: I1202 10:37:29.969637 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"]
Dec 02 10:37:30 crc kubenswrapper[4711]: I1202 10:37:30.971722 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm" event={"ID":"c9824b88-0553-466a-9c0d-07ab1949543a","Type":"ContainerStarted","Data":"762293544b4f49aaac6c0dce9cbe86b83872e925bd3a4c3642475933029c8461"}
Dec 02 10:37:30 crc kubenswrapper[4711]: I1202 10:37:30.972026 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm" event={"ID":"c9824b88-0553-466a-9c0d-07ab1949543a","Type":"ContainerStarted","Data":"4cf60297e61a5ea67066dbfdf1661a09ff851233e6fd3e950a4f8957b784bbd0"}
Dec 02 10:37:31 crc kubenswrapper[4711]: I1202 10:37:31.004550 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm" podStartSLOduration=1.45293541 podStartE2EDuration="2.004496887s" podCreationTimestamp="2025-12-02 10:37:29 +0000 UTC" firstStartedPulling="2025-12-02 10:37:29.968326324 +0000 UTC m=+1439.677692781" lastFinishedPulling="2025-12-02 10:37:30.519887771 +0000 UTC m=+1440.229254258" observedRunningTime="2025-12-02 10:37:30.995847061 +0000 UTC m=+1440.705213528" watchObservedRunningTime="2025-12-02 10:37:31.004496887 +0000 UTC m=+1440.713863344"
Dec 02 10:37:34 crc kubenswrapper[4711]: I1202 10:37:34.010497 4711 generic.go:334] "Generic (PLEG): container finished" podID="c9824b88-0553-466a-9c0d-07ab1949543a" containerID="762293544b4f49aaac6c0dce9cbe86b83872e925bd3a4c3642475933029c8461" exitCode=0
Dec 02 10:37:34 crc kubenswrapper[4711]: I1202 10:37:34.010632 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm" event={"ID":"c9824b88-0553-466a-9c0d-07ab1949543a","Type":"ContainerDied","Data":"762293544b4f49aaac6c0dce9cbe86b83872e925bd3a4c3642475933029c8461"}
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.527128 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.662798 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64x84\" (UniqueName: \"kubernetes.io/projected/c9824b88-0553-466a-9c0d-07ab1949543a-kube-api-access-64x84\") pod \"c9824b88-0553-466a-9c0d-07ab1949543a\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") "
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.662917 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-ssh-key\") pod \"c9824b88-0553-466a-9c0d-07ab1949543a\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") "
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.663142 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-inventory\") pod \"c9824b88-0553-466a-9c0d-07ab1949543a\" (UID: \"c9824b88-0553-466a-9c0d-07ab1949543a\") "
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.682286 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9824b88-0553-466a-9c0d-07ab1949543a-kube-api-access-64x84" (OuterVolumeSpecName: "kube-api-access-64x84") pod "c9824b88-0553-466a-9c0d-07ab1949543a" (UID: "c9824b88-0553-466a-9c0d-07ab1949543a"). InnerVolumeSpecName "kube-api-access-64x84". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.690679 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-inventory" (OuterVolumeSpecName: "inventory") pod "c9824b88-0553-466a-9c0d-07ab1949543a" (UID: "c9824b88-0553-466a-9c0d-07ab1949543a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.696776 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c9824b88-0553-466a-9c0d-07ab1949543a" (UID: "c9824b88-0553-466a-9c0d-07ab1949543a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.766194 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.766245 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64x84\" (UniqueName: \"kubernetes.io/projected/c9824b88-0553-466a-9c0d-07ab1949543a-kube-api-access-64x84\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:35 crc kubenswrapper[4711]: I1202 10:37:35.766265 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9824b88-0553-466a-9c0d-07ab1949543a-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.034009 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm" event={"ID":"c9824b88-0553-466a-9c0d-07ab1949543a","Type":"ContainerDied","Data":"4cf60297e61a5ea67066dbfdf1661a09ff851233e6fd3e950a4f8957b784bbd0"}
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.034066 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf60297e61a5ea67066dbfdf1661a09ff851233e6fd3e950a4f8957b784bbd0"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.034064 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6hwm"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.150960 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"]
Dec 02 10:37:36 crc kubenswrapper[4711]: E1202 10:37:36.151676 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9824b88-0553-466a-9c0d-07ab1949543a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.151691 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9824b88-0553-466a-9c0d-07ab1949543a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.151871 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9824b88-0553-466a-9c0d-07ab1949543a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.152456 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.154799 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.155173 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.159148 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.162148 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.165335 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"]
Dec 02 10:37:36 crc kubenswrapper[4711]: E1202 10:37:36.222556 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9824b88_0553_466a_9c0d_07ab1949543a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9824b88_0553_466a_9c0d_07ab1949543a.slice/crio-4cf60297e61a5ea67066dbfdf1661a09ff851233e6fd3e950a4f8957b784bbd0\": RecentStats: unable to find data in memory cache]"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.276440 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.276602 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmgwq\" (UniqueName: \"kubernetes.io/projected/4a833e50-6d25-4593-b413-ceb01d516010-kube-api-access-jmgwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.276711 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.276754 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.378802 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.378881 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.378995 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmgwq\" (UniqueName: \"kubernetes.io/projected/4a833e50-6d25-4593-b413-ceb01d516010-kube-api-access-jmgwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.379097 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.385768 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"
Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.386601 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID:
\"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.396645 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.409440 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmgwq\" (UniqueName: \"kubernetes.io/projected/4a833e50-6d25-4593-b413-ceb01d516010-kube-api-access-jmgwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" Dec 02 10:37:36 crc kubenswrapper[4711]: I1202 10:37:36.484479 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" Dec 02 10:37:37 crc kubenswrapper[4711]: W1202 10:37:37.093318 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a833e50_6d25_4593_b413_ceb01d516010.slice/crio-7918e684578add833f5679b53dfe8f0e0e7d91a331da30db4c3787f3b1132d0c WatchSource:0}: Error finding container 7918e684578add833f5679b53dfe8f0e0e7d91a331da30db4c3787f3b1132d0c: Status 404 returned error can't find the container with id 7918e684578add833f5679b53dfe8f0e0e7d91a331da30db4c3787f3b1132d0c Dec 02 10:37:37 crc kubenswrapper[4711]: I1202 10:37:37.114173 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh"] Dec 02 10:37:38 crc kubenswrapper[4711]: I1202 10:37:38.056808 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" event={"ID":"4a833e50-6d25-4593-b413-ceb01d516010","Type":"ContainerStarted","Data":"7918e684578add833f5679b53dfe8f0e0e7d91a331da30db4c3787f3b1132d0c"} Dec 02 10:37:39 crc kubenswrapper[4711]: I1202 10:37:39.072847 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" event={"ID":"4a833e50-6d25-4593-b413-ceb01d516010","Type":"ContainerStarted","Data":"41df927c493a46412193e5db66ee97382b83bbbb1dd26fa5ea770170ac4d4953"} Dec 02 10:37:39 crc kubenswrapper[4711]: I1202 10:37:39.112217 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" podStartSLOduration=2.27132368 podStartE2EDuration="3.112162439s" podCreationTimestamp="2025-12-02 10:37:36 +0000 UTC" firstStartedPulling="2025-12-02 10:37:37.122900923 +0000 UTC m=+1446.832267370" lastFinishedPulling="2025-12-02 10:37:37.963739672 +0000 UTC m=+1447.673106129" 
observedRunningTime="2025-12-02 10:37:39.098401525 +0000 UTC m=+1448.807768012" watchObservedRunningTime="2025-12-02 10:37:39.112162439 +0000 UTC m=+1448.821528926" Dec 02 10:38:22 crc kubenswrapper[4711]: I1202 10:38:22.607559 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:38:22 crc kubenswrapper[4711]: I1202 10:38:22.608079 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:38:33 crc kubenswrapper[4711]: I1202 10:38:33.622979 4711 scope.go:117] "RemoveContainer" containerID="e632f4ed340d82d415adfab0ed398b9b08cb10d0068e010bc2cdc241638edf3d" Dec 02 10:38:33 crc kubenswrapper[4711]: I1202 10:38:33.658751 4711 scope.go:117] "RemoveContainer" containerID="0ad12b4324fa1097c4b553e771c45bfcd6df7e3f89a17a2ecb7a4c0e8dc33f8a" Dec 02 10:38:33 crc kubenswrapper[4711]: I1202 10:38:33.771194 4711 scope.go:117] "RemoveContainer" containerID="f25da8488085ba3ab744b7fb07dfac8c755f3b19c1b1982493783b6a8fc5f856" Dec 02 10:38:52 crc kubenswrapper[4711]: I1202 10:38:52.586085 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:38:52 crc kubenswrapper[4711]: I1202 10:38:52.586636 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" 
podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:39:22 crc kubenswrapper[4711]: I1202 10:39:22.586449 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:39:22 crc kubenswrapper[4711]: I1202 10:39:22.587661 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:39:22 crc kubenswrapper[4711]: I1202 10:39:22.587884 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:39:22 crc kubenswrapper[4711]: I1202 10:39:22.589732 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:39:22 crc kubenswrapper[4711]: I1202 10:39:22.589900 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" gracePeriod=600 Dec 02 
10:39:22 crc kubenswrapper[4711]: E1202 10:39:22.728369 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:39:23 crc kubenswrapper[4711]: I1202 10:39:23.252285 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c"} Dec 02 10:39:23 crc kubenswrapper[4711]: I1202 10:39:23.252255 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" exitCode=0 Dec 02 10:39:23 crc kubenswrapper[4711]: I1202 10:39:23.252501 4711 scope.go:117] "RemoveContainer" containerID="aa264d5b0f373424df2b67d6e79de2f6c80da037caa0a7a377debbcb2ad5e375" Dec 02 10:39:23 crc kubenswrapper[4711]: I1202 10:39:23.254619 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:39:23 crc kubenswrapper[4711]: E1202 10:39:23.256886 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:39:33 crc kubenswrapper[4711]: I1202 10:39:33.873433 4711 
scope.go:117] "RemoveContainer" containerID="50ad3fcc3190aeafd15a08e8175caac18d43d29acc090664a74094b03d61e267" Dec 02 10:39:39 crc kubenswrapper[4711]: I1202 10:39:39.079158 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:39:39 crc kubenswrapper[4711]: E1202 10:39:39.080066 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.668727 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6zlb2"] Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.671796 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.721892 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zlb2"] Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.829304 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-utilities\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.829393 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7wwj\" (UniqueName: \"kubernetes.io/projected/5834d56c-b600-407b-96e2-9291741abd7f-kube-api-access-n7wwj\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.829457 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-catalog-content\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.931162 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7wwj\" (UniqueName: \"kubernetes.io/projected/5834d56c-b600-407b-96e2-9291741abd7f-kube-api-access-n7wwj\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.931240 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-catalog-content\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.931405 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-utilities\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.932072 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-catalog-content\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.932097 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-utilities\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:40 crc kubenswrapper[4711]: I1202 10:39:40.954657 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7wwj\" (UniqueName: \"kubernetes.io/projected/5834d56c-b600-407b-96e2-9291741abd7f-kube-api-access-n7wwj\") pod \"redhat-marketplace-6zlb2\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:41 crc kubenswrapper[4711]: I1202 10:39:41.014544 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:41 crc kubenswrapper[4711]: I1202 10:39:41.503784 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zlb2"] Dec 02 10:39:42 crc kubenswrapper[4711]: I1202 10:39:42.479247 4711 generic.go:334] "Generic (PLEG): container finished" podID="5834d56c-b600-407b-96e2-9291741abd7f" containerID="ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49" exitCode=0 Dec 02 10:39:42 crc kubenswrapper[4711]: I1202 10:39:42.479353 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zlb2" event={"ID":"5834d56c-b600-407b-96e2-9291741abd7f","Type":"ContainerDied","Data":"ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49"} Dec 02 10:39:42 crc kubenswrapper[4711]: I1202 10:39:42.479657 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zlb2" event={"ID":"5834d56c-b600-407b-96e2-9291741abd7f","Type":"ContainerStarted","Data":"35506b279b771e558a87a01d8afc016fc62321375e9c03a3724ba53a40592106"} Dec 02 10:39:43 crc kubenswrapper[4711]: I1202 10:39:43.491174 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zlb2" event={"ID":"5834d56c-b600-407b-96e2-9291741abd7f","Type":"ContainerStarted","Data":"a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127"} Dec 02 10:39:44 crc kubenswrapper[4711]: I1202 10:39:44.501935 4711 generic.go:334] "Generic (PLEG): container finished" podID="5834d56c-b600-407b-96e2-9291741abd7f" containerID="a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127" exitCode=0 Dec 02 10:39:44 crc kubenswrapper[4711]: I1202 10:39:44.502001 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zlb2" 
event={"ID":"5834d56c-b600-407b-96e2-9291741abd7f","Type":"ContainerDied","Data":"a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127"} Dec 02 10:39:45 crc kubenswrapper[4711]: I1202 10:39:45.515407 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zlb2" event={"ID":"5834d56c-b600-407b-96e2-9291741abd7f","Type":"ContainerStarted","Data":"8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193"} Dec 02 10:39:45 crc kubenswrapper[4711]: I1202 10:39:45.535688 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6zlb2" podStartSLOduration=2.99293338 podStartE2EDuration="5.535640876s" podCreationTimestamp="2025-12-02 10:39:40 +0000 UTC" firstStartedPulling="2025-12-02 10:39:42.483590671 +0000 UTC m=+1572.192957148" lastFinishedPulling="2025-12-02 10:39:45.026298197 +0000 UTC m=+1574.735664644" observedRunningTime="2025-12-02 10:39:45.533063886 +0000 UTC m=+1575.242430353" watchObservedRunningTime="2025-12-02 10:39:45.535640876 +0000 UTC m=+1575.245007323" Dec 02 10:39:51 crc kubenswrapper[4711]: I1202 10:39:51.015041 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:51 crc kubenswrapper[4711]: I1202 10:39:51.015514 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:51 crc kubenswrapper[4711]: I1202 10:39:51.090338 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:39:51 crc kubenswrapper[4711]: E1202 10:39:51.090640 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:39:51 crc kubenswrapper[4711]: I1202 10:39:51.095058 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:51 crc kubenswrapper[4711]: I1202 10:39:51.643207 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:51 crc kubenswrapper[4711]: I1202 10:39:51.698889 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zlb2"] Dec 02 10:39:53 crc kubenswrapper[4711]: I1202 10:39:53.610164 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6zlb2" podUID="5834d56c-b600-407b-96e2-9291741abd7f" containerName="registry-server" containerID="cri-o://8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193" gracePeriod=2 Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.191458 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.235667 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-utilities\") pod \"5834d56c-b600-407b-96e2-9291741abd7f\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.235754 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-catalog-content\") pod \"5834d56c-b600-407b-96e2-9291741abd7f\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.235810 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7wwj\" (UniqueName: \"kubernetes.io/projected/5834d56c-b600-407b-96e2-9291741abd7f-kube-api-access-n7wwj\") pod \"5834d56c-b600-407b-96e2-9291741abd7f\" (UID: \"5834d56c-b600-407b-96e2-9291741abd7f\") " Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.238193 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-utilities" (OuterVolumeSpecName: "utilities") pod "5834d56c-b600-407b-96e2-9291741abd7f" (UID: "5834d56c-b600-407b-96e2-9291741abd7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.247337 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5834d56c-b600-407b-96e2-9291741abd7f-kube-api-access-n7wwj" (OuterVolumeSpecName: "kube-api-access-n7wwj") pod "5834d56c-b600-407b-96e2-9291741abd7f" (UID: "5834d56c-b600-407b-96e2-9291741abd7f"). InnerVolumeSpecName "kube-api-access-n7wwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.256933 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5834d56c-b600-407b-96e2-9291741abd7f" (UID: "5834d56c-b600-407b-96e2-9291741abd7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.338187 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.338222 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5834d56c-b600-407b-96e2-9291741abd7f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.338236 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7wwj\" (UniqueName: \"kubernetes.io/projected/5834d56c-b600-407b-96e2-9291741abd7f-kube-api-access-n7wwj\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.624582 4711 generic.go:334] "Generic (PLEG): container finished" podID="5834d56c-b600-407b-96e2-9291741abd7f" containerID="8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193" exitCode=0 Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.624670 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zlb2" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.624665 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zlb2" event={"ID":"5834d56c-b600-407b-96e2-9291741abd7f","Type":"ContainerDied","Data":"8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193"} Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.624779 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zlb2" event={"ID":"5834d56c-b600-407b-96e2-9291741abd7f","Type":"ContainerDied","Data":"35506b279b771e558a87a01d8afc016fc62321375e9c03a3724ba53a40592106"} Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.624808 4711 scope.go:117] "RemoveContainer" containerID="8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.651130 4711 scope.go:117] "RemoveContainer" containerID="a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.672117 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zlb2"] Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.678703 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zlb2"] Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.710413 4711 scope.go:117] "RemoveContainer" containerID="ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.741249 4711 scope.go:117] "RemoveContainer" containerID="8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193" Dec 02 10:39:54 crc kubenswrapper[4711]: E1202 10:39:54.741786 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193\": container with ID starting with 8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193 not found: ID does not exist" containerID="8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.741828 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193"} err="failed to get container status \"8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193\": rpc error: code = NotFound desc = could not find container \"8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193\": container with ID starting with 8df9d26ae1721755ba22176d45ca0178354bfdb0e38ac19c21da3f2ded8f9193 not found: ID does not exist" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.741850 4711 scope.go:117] "RemoveContainer" containerID="a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127" Dec 02 10:39:54 crc kubenswrapper[4711]: E1202 10:39:54.742532 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127\": container with ID starting with a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127 not found: ID does not exist" containerID="a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.742586 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127"} err="failed to get container status \"a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127\": rpc error: code = NotFound desc = could not find container \"a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127\": container with ID 
starting with a8f335981b77c912f375da201fd126b057f1f45d68975dff89ffb3c1e21ee127 not found: ID does not exist" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.742624 4711 scope.go:117] "RemoveContainer" containerID="ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49" Dec 02 10:39:54 crc kubenswrapper[4711]: E1202 10:39:54.742992 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49\": container with ID starting with ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49 not found: ID does not exist" containerID="ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49" Dec 02 10:39:54 crc kubenswrapper[4711]: I1202 10:39:54.743017 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49"} err="failed to get container status \"ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49\": rpc error: code = NotFound desc = could not find container \"ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49\": container with ID starting with ca6e4282ab5feef719425955fa726f134ad936594dd064efe1d4eb1c7bfe0d49 not found: ID does not exist" Dec 02 10:39:55 crc kubenswrapper[4711]: I1202 10:39:55.097239 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5834d56c-b600-407b-96e2-9291741abd7f" path="/var/lib/kubelet/pods/5834d56c-b600-407b-96e2-9291741abd7f/volumes" Dec 02 10:40:03 crc kubenswrapper[4711]: I1202 10:40:03.078836 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:40:03 crc kubenswrapper[4711]: E1202 10:40:03.079837 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.069266 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8txm5"] Dec 02 10:40:05 crc kubenswrapper[4711]: E1202 10:40:05.070013 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5834d56c-b600-407b-96e2-9291741abd7f" containerName="extract-utilities" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.070045 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5834d56c-b600-407b-96e2-9291741abd7f" containerName="extract-utilities" Dec 02 10:40:05 crc kubenswrapper[4711]: E1202 10:40:05.070066 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5834d56c-b600-407b-96e2-9291741abd7f" containerName="extract-content" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.070072 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5834d56c-b600-407b-96e2-9291741abd7f" containerName="extract-content" Dec 02 10:40:05 crc kubenswrapper[4711]: E1202 10:40:05.070103 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5834d56c-b600-407b-96e2-9291741abd7f" containerName="registry-server" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.070111 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5834d56c-b600-407b-96e2-9291741abd7f" containerName="registry-server" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.070319 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="5834d56c-b600-407b-96e2-9291741abd7f" containerName="registry-server" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.071631 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.095392 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8txm5"] Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.167875 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-catalog-content\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.167952 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-utilities\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.168055 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9dt9\" (UniqueName: \"kubernetes.io/projected/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-kube-api-access-d9dt9\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.269229 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-catalog-content\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.269283 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-utilities\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.269351 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9dt9\" (UniqueName: \"kubernetes.io/projected/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-kube-api-access-d9dt9\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.270110 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-catalog-content\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.270331 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-utilities\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.293279 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9dt9\" (UniqueName: \"kubernetes.io/projected/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-kube-api-access-d9dt9\") pod \"certified-operators-8txm5\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.396056 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:05 crc kubenswrapper[4711]: I1202 10:40:05.950368 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8txm5"] Dec 02 10:40:06 crc kubenswrapper[4711]: I1202 10:40:06.784371 4711 generic.go:334] "Generic (PLEG): container finished" podID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerID="a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0" exitCode=0 Dec 02 10:40:06 crc kubenswrapper[4711]: I1202 10:40:06.784474 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8txm5" event={"ID":"c5f0d36b-3cb9-4b41-821c-9f4ca5141386","Type":"ContainerDied","Data":"a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0"} Dec 02 10:40:06 crc kubenswrapper[4711]: I1202 10:40:06.784651 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8txm5" event={"ID":"c5f0d36b-3cb9-4b41-821c-9f4ca5141386","Type":"ContainerStarted","Data":"edba4d28e9fd639e85fbe33c91c1cb18763f161130939bbee9274e1a0ef55efb"} Dec 02 10:40:06 crc kubenswrapper[4711]: I1202 10:40:06.790464 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:40:08 crc kubenswrapper[4711]: I1202 10:40:08.807399 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8txm5" event={"ID":"c5f0d36b-3cb9-4b41-821c-9f4ca5141386","Type":"ContainerDied","Data":"001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1"} Dec 02 10:40:08 crc kubenswrapper[4711]: I1202 10:40:08.807327 4711 generic.go:334] "Generic (PLEG): container finished" podID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerID="001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1" exitCode=0 Dec 02 10:40:09 crc kubenswrapper[4711]: I1202 10:40:09.820037 4711 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-8txm5" event={"ID":"c5f0d36b-3cb9-4b41-821c-9f4ca5141386","Type":"ContainerStarted","Data":"0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac"} Dec 02 10:40:09 crc kubenswrapper[4711]: I1202 10:40:09.856230 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8txm5" podStartSLOduration=2.24380709 podStartE2EDuration="4.856202082s" podCreationTimestamp="2025-12-02 10:40:05 +0000 UTC" firstStartedPulling="2025-12-02 10:40:06.789845071 +0000 UTC m=+1596.499211558" lastFinishedPulling="2025-12-02 10:40:09.402240053 +0000 UTC m=+1599.111606550" observedRunningTime="2025-12-02 10:40:09.838153263 +0000 UTC m=+1599.547519720" watchObservedRunningTime="2025-12-02 10:40:09.856202082 +0000 UTC m=+1599.565568569" Dec 02 10:40:15 crc kubenswrapper[4711]: I1202 10:40:15.397226 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:15 crc kubenswrapper[4711]: I1202 10:40:15.397838 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:15 crc kubenswrapper[4711]: I1202 10:40:15.451658 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:15 crc kubenswrapper[4711]: I1202 10:40:15.944424 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:15 crc kubenswrapper[4711]: I1202 10:40:15.995736 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8txm5"] Dec 02 10:40:17 crc kubenswrapper[4711]: I1202 10:40:17.921247 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8txm5" 
podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerName="registry-server" containerID="cri-o://0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac" gracePeriod=2 Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.078693 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:40:18 crc kubenswrapper[4711]: E1202 10:40:18.079346 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.401472 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.547277 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-utilities\") pod \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.547598 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-catalog-content\") pod \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.547675 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9dt9\" (UniqueName: 
\"kubernetes.io/projected/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-kube-api-access-d9dt9\") pod \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\" (UID: \"c5f0d36b-3cb9-4b41-821c-9f4ca5141386\") " Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.548498 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-utilities" (OuterVolumeSpecName: "utilities") pod "c5f0d36b-3cb9-4b41-821c-9f4ca5141386" (UID: "c5f0d36b-3cb9-4b41-821c-9f4ca5141386"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.552927 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-kube-api-access-d9dt9" (OuterVolumeSpecName: "kube-api-access-d9dt9") pod "c5f0d36b-3cb9-4b41-821c-9f4ca5141386" (UID: "c5f0d36b-3cb9-4b41-821c-9f4ca5141386"). InnerVolumeSpecName "kube-api-access-d9dt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.597102 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5f0d36b-3cb9-4b41-821c-9f4ca5141386" (UID: "c5f0d36b-3cb9-4b41-821c-9f4ca5141386"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.650575 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.650675 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9dt9\" (UniqueName: \"kubernetes.io/projected/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-kube-api-access-d9dt9\") on node \"crc\" DevicePath \"\"" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.650688 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f0d36b-3cb9-4b41-821c-9f4ca5141386-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.954894 4711 generic.go:334] "Generic (PLEG): container finished" podID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerID="0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac" exitCode=0 Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.955049 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8txm5" event={"ID":"c5f0d36b-3cb9-4b41-821c-9f4ca5141386","Type":"ContainerDied","Data":"0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac"} Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.955109 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8txm5" event={"ID":"c5f0d36b-3cb9-4b41-821c-9f4ca5141386","Type":"ContainerDied","Data":"edba4d28e9fd639e85fbe33c91c1cb18763f161130939bbee9274e1a0ef55efb"} Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.955152 4711 scope.go:117] "RemoveContainer" containerID="0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 
10:40:18.955568 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8txm5" Dec 02 10:40:18 crc kubenswrapper[4711]: I1202 10:40:18.992314 4711 scope.go:117] "RemoveContainer" containerID="001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1" Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.002636 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8txm5"] Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.010662 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8txm5"] Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.028238 4711 scope.go:117] "RemoveContainer" containerID="a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0" Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.071450 4711 scope.go:117] "RemoveContainer" containerID="0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac" Dec 02 10:40:19 crc kubenswrapper[4711]: E1202 10:40:19.071922 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac\": container with ID starting with 0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac not found: ID does not exist" containerID="0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac" Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.071994 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac"} err="failed to get container status \"0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac\": rpc error: code = NotFound desc = could not find container \"0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac\": container with ID starting with 
0a394ef3c2f09840d827b8430049fc0c3731d3d284299fbd0582babb1b450eac not found: ID does not exist" Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.072030 4711 scope.go:117] "RemoveContainer" containerID="001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1" Dec 02 10:40:19 crc kubenswrapper[4711]: E1202 10:40:19.072499 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1\": container with ID starting with 001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1 not found: ID does not exist" containerID="001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1" Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.072546 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1"} err="failed to get container status \"001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1\": rpc error: code = NotFound desc = could not find container \"001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1\": container with ID starting with 001a91a39dbd3445e344f004c604a6f5e287028b7c0ac193088069f962b933e1 not found: ID does not exist" Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.072577 4711 scope.go:117] "RemoveContainer" containerID="a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0" Dec 02 10:40:19 crc kubenswrapper[4711]: E1202 10:40:19.072856 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0\": container with ID starting with a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0 not found: ID does not exist" containerID="a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0" Dec 02 10:40:19 crc 
kubenswrapper[4711]: I1202 10:40:19.072899 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0"} err="failed to get container status \"a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0\": rpc error: code = NotFound desc = could not find container \"a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0\": container with ID starting with a634131f02598b5c18144af9ff7eae53b7792a0a32375e2167ca2a8994f4d7a0 not found: ID does not exist" Dec 02 10:40:19 crc kubenswrapper[4711]: I1202 10:40:19.090534 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" path="/var/lib/kubelet/pods/c5f0d36b-3cb9-4b41-821c-9f4ca5141386/volumes" Dec 02 10:40:31 crc kubenswrapper[4711]: I1202 10:40:31.088993 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:40:31 crc kubenswrapper[4711]: E1202 10:40:31.090165 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:40:33 crc kubenswrapper[4711]: I1202 10:40:33.921502 4711 scope.go:117] "RemoveContainer" containerID="05a43d1badc2458a044e8ee9b36da175b623d18c76ddc433e87c3aea77edade1" Dec 02 10:40:46 crc kubenswrapper[4711]: I1202 10:40:46.078464 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:40:46 crc kubenswrapper[4711]: E1202 10:40:46.079376 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:40:58 crc kubenswrapper[4711]: I1202 10:40:58.374402 4711 generic.go:334] "Generic (PLEG): container finished" podID="4a833e50-6d25-4593-b413-ceb01d516010" containerID="41df927c493a46412193e5db66ee97382b83bbbb1dd26fa5ea770170ac4d4953" exitCode=0 Dec 02 10:40:58 crc kubenswrapper[4711]: I1202 10:40:58.374514 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" event={"ID":"4a833e50-6d25-4593-b413-ceb01d516010","Type":"ContainerDied","Data":"41df927c493a46412193e5db66ee97382b83bbbb1dd26fa5ea770170ac4d4953"} Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.078204 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:40:59 crc kubenswrapper[4711]: E1202 10:40:59.078540 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.777015 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.908508 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-bootstrap-combined-ca-bundle\") pod \"4a833e50-6d25-4593-b413-ceb01d516010\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.908698 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-ssh-key\") pod \"4a833e50-6d25-4593-b413-ceb01d516010\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.908750 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-inventory\") pod \"4a833e50-6d25-4593-b413-ceb01d516010\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.908793 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmgwq\" (UniqueName: \"kubernetes.io/projected/4a833e50-6d25-4593-b413-ceb01d516010-kube-api-access-jmgwq\") pod \"4a833e50-6d25-4593-b413-ceb01d516010\" (UID: \"4a833e50-6d25-4593-b413-ceb01d516010\") " Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.917883 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4a833e50-6d25-4593-b413-ceb01d516010" (UID: "4a833e50-6d25-4593-b413-ceb01d516010"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.919269 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a833e50-6d25-4593-b413-ceb01d516010-kube-api-access-jmgwq" (OuterVolumeSpecName: "kube-api-access-jmgwq") pod "4a833e50-6d25-4593-b413-ceb01d516010" (UID: "4a833e50-6d25-4593-b413-ceb01d516010"). InnerVolumeSpecName "kube-api-access-jmgwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.948055 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-inventory" (OuterVolumeSpecName: "inventory") pod "4a833e50-6d25-4593-b413-ceb01d516010" (UID: "4a833e50-6d25-4593-b413-ceb01d516010"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:40:59 crc kubenswrapper[4711]: I1202 10:40:59.958042 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a833e50-6d25-4593-b413-ceb01d516010" (UID: "4a833e50-6d25-4593-b413-ceb01d516010"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.014525 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmgwq\" (UniqueName: \"kubernetes.io/projected/4a833e50-6d25-4593-b413-ceb01d516010-kube-api-access-jmgwq\") on node \"crc\" DevicePath \"\"" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.014559 4711 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.014569 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.014579 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a833e50-6d25-4593-b413-ceb01d516010-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.394687 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" event={"ID":"4a833e50-6d25-4593-b413-ceb01d516010","Type":"ContainerDied","Data":"7918e684578add833f5679b53dfe8f0e0e7d91a331da30db4c3787f3b1132d0c"} Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.395064 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7918e684578add833f5679b53dfe8f0e0e7d91a331da30db4c3787f3b1132d0c" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.395155 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.484742 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt"] Dec 02 10:41:00 crc kubenswrapper[4711]: E1202 10:41:00.485235 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a833e50-6d25-4593-b413-ceb01d516010" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.485263 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a833e50-6d25-4593-b413-ceb01d516010" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 10:41:00 crc kubenswrapper[4711]: E1202 10:41:00.485283 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerName="registry-server" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.485290 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerName="registry-server" Dec 02 10:41:00 crc kubenswrapper[4711]: E1202 10:41:00.485340 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerName="extract-content" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.485348 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerName="extract-content" Dec 02 10:41:00 crc kubenswrapper[4711]: E1202 10:41:00.485354 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerName="extract-utilities" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.485360 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerName="extract-utilities" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.485532 
4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a833e50-6d25-4593-b413-ceb01d516010" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.485555 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f0d36b-3cb9-4b41-821c-9f4ca5141386" containerName="registry-server" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.486309 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.488420 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.488899 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.488995 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.489668 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.505667 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt"] Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.523531 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6x6\" (UniqueName: \"kubernetes.io/projected/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-kube-api-access-4r6x6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc 
kubenswrapper[4711]: I1202 10:41:00.523671 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.524060 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.626073 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6x6\" (UniqueName: \"kubernetes.io/projected/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-kube-api-access-4r6x6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.626187 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.626239 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.632191 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.635052 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.650909 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6x6\" (UniqueName: \"kubernetes.io/projected/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-kube-api-access-4r6x6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:00 crc kubenswrapper[4711]: I1202 10:41:00.808570 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:41:01 crc kubenswrapper[4711]: I1202 10:41:01.340784 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt"] Dec 02 10:41:01 crc kubenswrapper[4711]: I1202 10:41:01.406509 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" event={"ID":"1a61e5f0-3651-4c39-aec6-5c6ae688a94c","Type":"ContainerStarted","Data":"611683a177d054d64a0c64d59e2a666e30284946ade2c31adfb1bd9ed9fb3799"} Dec 02 10:41:03 crc kubenswrapper[4711]: I1202 10:41:03.432982 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" event={"ID":"1a61e5f0-3651-4c39-aec6-5c6ae688a94c","Type":"ContainerStarted","Data":"0d2b5d72f8d69b3cb0b4782a34ee0488d09ada61f7f93e60807aaf21f3ee6326"} Dec 02 10:41:03 crc kubenswrapper[4711]: I1202 10:41:03.457667 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" podStartSLOduration=2.662805336 podStartE2EDuration="3.457609348s" podCreationTimestamp="2025-12-02 10:41:00 +0000 UTC" firstStartedPulling="2025-12-02 10:41:01.351303476 +0000 UTC m=+1651.060669923" lastFinishedPulling="2025-12-02 10:41:02.146107498 +0000 UTC m=+1651.855473935" observedRunningTime="2025-12-02 10:41:03.45213827 +0000 UTC m=+1653.161504747" watchObservedRunningTime="2025-12-02 10:41:03.457609348 +0000 UTC m=+1653.166975805" Dec 02 10:41:13 crc kubenswrapper[4711]: I1202 10:41:13.078772 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:41:13 crc kubenswrapper[4711]: E1202 10:41:13.079614 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:41:25 crc kubenswrapper[4711]: I1202 10:41:25.078315 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:41:25 crc kubenswrapper[4711]: E1202 10:41:25.079147 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:41:34 crc kubenswrapper[4711]: I1202 10:41:34.701315 4711 scope.go:117] "RemoveContainer" containerID="e1e038cceb600722916e1e2b28551ee529e9a291fd9a4410af8b2f84d2b6ad6d" Dec 02 10:41:34 crc kubenswrapper[4711]: I1202 10:41:34.728790 4711 scope.go:117] "RemoveContainer" containerID="0041beba60573df95450890bce830ae365ddc2ec9e0305c0859cf45ef98c34db" Dec 02 10:41:36 crc kubenswrapper[4711]: I1202 10:41:36.078403 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:41:36 crc kubenswrapper[4711]: E1202 10:41:36.079051 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" 
podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:41:40 crc kubenswrapper[4711]: I1202 10:41:40.040917 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f853-account-create-update-zk2bx"] Dec 02 10:41:40 crc kubenswrapper[4711]: I1202 10:41:40.051503 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-tsgjc"] Dec 02 10:41:40 crc kubenswrapper[4711]: I1202 10:41:40.060665 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f853-account-create-update-zk2bx"] Dec 02 10:41:40 crc kubenswrapper[4711]: I1202 10:41:40.070128 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-tsgjc"] Dec 02 10:41:41 crc kubenswrapper[4711]: I1202 10:41:41.089167 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e044e2-2341-4cf1-8669-9beba7eec45c" path="/var/lib/kubelet/pods/31e044e2-2341-4cf1-8669-9beba7eec45c/volumes" Dec 02 10:41:41 crc kubenswrapper[4711]: I1202 10:41:41.090164 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e039b8-78fb-43a0-9ab0-7d3a6dc43198" path="/var/lib/kubelet/pods/78e039b8-78fb-43a0-9ab0-7d3a6dc43198/volumes" Dec 02 10:41:42 crc kubenswrapper[4711]: I1202 10:41:42.031611 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fv2p5"] Dec 02 10:41:42 crc kubenswrapper[4711]: I1202 10:41:42.039926 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9162-account-create-update-vnj4b"] Dec 02 10:41:42 crc kubenswrapper[4711]: I1202 10:41:42.049613 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fv2p5"] Dec 02 10:41:42 crc kubenswrapper[4711]: I1202 10:41:42.057135 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9162-account-create-update-vnj4b"] Dec 02 10:41:43 crc kubenswrapper[4711]: I1202 10:41:43.041798 4711 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/placement-db-create-vjxm2"] Dec 02 10:41:43 crc kubenswrapper[4711]: I1202 10:41:43.050852 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d47a-account-create-update-r7rzm"] Dec 02 10:41:43 crc kubenswrapper[4711]: I1202 10:41:43.060050 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vjxm2"] Dec 02 10:41:43 crc kubenswrapper[4711]: I1202 10:41:43.068241 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d47a-account-create-update-r7rzm"] Dec 02 10:41:43 crc kubenswrapper[4711]: I1202 10:41:43.090968 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006b5b7a-0ef9-442a-9e52-462f5ef784ee" path="/var/lib/kubelet/pods/006b5b7a-0ef9-442a-9e52-462f5ef784ee/volumes" Dec 02 10:41:43 crc kubenswrapper[4711]: I1202 10:41:43.091712 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08062a70-e0c0-4bd0-b8e0-ab0a85d486a8" path="/var/lib/kubelet/pods/08062a70-e0c0-4bd0-b8e0-ab0a85d486a8/volumes" Dec 02 10:41:43 crc kubenswrapper[4711]: I1202 10:41:43.092363 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9486b59-2bf0-492f-84e1-0a832e7b366c" path="/var/lib/kubelet/pods/b9486b59-2bf0-492f-84e1-0a832e7b366c/volumes" Dec 02 10:41:43 crc kubenswrapper[4711]: I1202 10:41:43.092897 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f13da34c-a52d-4dde-8514-0ddb2cac7f4c" path="/var/lib/kubelet/pods/f13da34c-a52d-4dde-8514-0ddb2cac7f4c/volumes" Dec 02 10:41:49 crc kubenswrapper[4711]: I1202 10:41:49.079197 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:41:49 crc kubenswrapper[4711]: E1202 10:41:49.080416 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:42:00 crc kubenswrapper[4711]: I1202 10:42:00.078237 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:42:00 crc kubenswrapper[4711]: E1202 10:42:00.079027 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:42:07 crc kubenswrapper[4711]: I1202 10:42:07.047542 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jbrrj"] Dec 02 10:42:07 crc kubenswrapper[4711]: I1202 10:42:07.067051 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-vc295"] Dec 02 10:42:07 crc kubenswrapper[4711]: I1202 10:42:07.089642 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jbrrj"] Dec 02 10:42:07 crc kubenswrapper[4711]: I1202 10:42:07.099077 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9019-account-create-update-znhxs"] Dec 02 10:42:07 crc kubenswrapper[4711]: I1202 10:42:07.109847 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-vc295"] Dec 02 10:42:07 crc kubenswrapper[4711]: I1202 10:42:07.119863 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-dcca-account-create-update-w2cd7"] Dec 02 10:42:07 crc kubenswrapper[4711]: I1202 10:42:07.127500 4711 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cinder-9019-account-create-update-znhxs"] Dec 02 10:42:07 crc kubenswrapper[4711]: I1202 10:42:07.134653 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-dcca-account-create-update-w2cd7"] Dec 02 10:42:08 crc kubenswrapper[4711]: I1202 10:42:08.026026 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ab0c-account-create-update-cwnxk"] Dec 02 10:42:08 crc kubenswrapper[4711]: I1202 10:42:08.033171 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ng955"] Dec 02 10:42:08 crc kubenswrapper[4711]: I1202 10:42:08.041750 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ab0c-account-create-update-cwnxk"] Dec 02 10:42:08 crc kubenswrapper[4711]: I1202 10:42:08.049072 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ng955"] Dec 02 10:42:09 crc kubenswrapper[4711]: I1202 10:42:09.098099 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="366b0e56-2601-4ae2-90be-958339d5bde1" path="/var/lib/kubelet/pods/366b0e56-2601-4ae2-90be-958339d5bde1/volumes" Dec 02 10:42:09 crc kubenswrapper[4711]: I1202 10:42:09.099429 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe97d90-558c-4c53-bfe6-21b93c167ede" path="/var/lib/kubelet/pods/3fe97d90-558c-4c53-bfe6-21b93c167ede/volumes" Dec 02 10:42:09 crc kubenswrapper[4711]: I1202 10:42:09.100797 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b4e096-f633-4842-a5e1-9cc10c99ff50" path="/var/lib/kubelet/pods/91b4e096-f633-4842-a5e1-9cc10c99ff50/volumes" Dec 02 10:42:09 crc kubenswrapper[4711]: I1202 10:42:09.102064 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a961b46e-6c27-4361-8dbd-7cc28d6b2a32" path="/var/lib/kubelet/pods/a961b46e-6c27-4361-8dbd-7cc28d6b2a32/volumes" Dec 02 10:42:09 crc kubenswrapper[4711]: I1202 10:42:09.104244 
4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5ba869-bfa6-40fa-b81b-7b7f3490e36e" path="/var/lib/kubelet/pods/ce5ba869-bfa6-40fa-b81b-7b7f3490e36e/volumes" Dec 02 10:42:09 crc kubenswrapper[4711]: I1202 10:42:09.105378 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffdcca1e-8be0-4069-888b-08a26ffaf8b0" path="/var/lib/kubelet/pods/ffdcca1e-8be0-4069-888b-08a26ffaf8b0/volumes" Dec 02 10:42:12 crc kubenswrapper[4711]: I1202 10:42:12.078017 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:42:12 crc kubenswrapper[4711]: E1202 10:42:12.078687 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:42:16 crc kubenswrapper[4711]: I1202 10:42:16.050146 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-svtvp"] Dec 02 10:42:16 crc kubenswrapper[4711]: I1202 10:42:16.062197 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-svtvp"] Dec 02 10:42:17 crc kubenswrapper[4711]: I1202 10:42:17.090343 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b94dbaa-33c0-42b0-b71a-9af5fda1a876" path="/var/lib/kubelet/pods/2b94dbaa-33c0-42b0-b71a-9af5fda1a876/volumes" Dec 02 10:42:24 crc kubenswrapper[4711]: I1202 10:42:24.078293 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:42:24 crc kubenswrapper[4711]: E1202 10:42:24.079206 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:42:29 crc kubenswrapper[4711]: I1202 10:42:29.041070 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-s2r4g"] Dec 02 10:42:29 crc kubenswrapper[4711]: I1202 10:42:29.051036 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-s2r4g"] Dec 02 10:42:29 crc kubenswrapper[4711]: I1202 10:42:29.090878 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ce0662-b1aa-405b-a577-ebfd14385735" path="/var/lib/kubelet/pods/12ce0662-b1aa-405b-a577-ebfd14385735/volumes" Dec 02 10:42:33 crc kubenswrapper[4711]: I1202 10:42:33.364222 4711 generic.go:334] "Generic (PLEG): container finished" podID="1a61e5f0-3651-4c39-aec6-5c6ae688a94c" containerID="0d2b5d72f8d69b3cb0b4782a34ee0488d09ada61f7f93e60807aaf21f3ee6326" exitCode=0 Dec 02 10:42:33 crc kubenswrapper[4711]: I1202 10:42:33.364294 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" event={"ID":"1a61e5f0-3651-4c39-aec6-5c6ae688a94c","Type":"ContainerDied","Data":"0d2b5d72f8d69b3cb0b4782a34ee0488d09ada61f7f93e60807aaf21f3ee6326"} Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.768336 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.819819 4711 scope.go:117] "RemoveContainer" containerID="316b3c207af77a1cdf7f8488ac89dcfb306bf4a18c8b078b30f9d746054fcf33" Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.859160 4711 scope.go:117] "RemoveContainer" containerID="d090711390aa5aff883abb4f0488ec192a0764d961208bf53c98a4e2a59648d1" Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.883781 4711 scope.go:117] "RemoveContainer" containerID="92eff58edfde5b823784837d2d5a0e2e4cb3e59b132d4bc70132de6e5538c8ac" Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.904324 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-ssh-key\") pod \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.904543 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r6x6\" (UniqueName: \"kubernetes.io/projected/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-kube-api-access-4r6x6\") pod \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.904561 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-inventory\") pod \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\" (UID: \"1a61e5f0-3651-4c39-aec6-5c6ae688a94c\") " Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.910861 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-kube-api-access-4r6x6" (OuterVolumeSpecName: "kube-api-access-4r6x6") pod "1a61e5f0-3651-4c39-aec6-5c6ae688a94c" 
(UID: "1a61e5f0-3651-4c39-aec6-5c6ae688a94c"). InnerVolumeSpecName "kube-api-access-4r6x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.931356 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1a61e5f0-3651-4c39-aec6-5c6ae688a94c" (UID: "1a61e5f0-3651-4c39-aec6-5c6ae688a94c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.933934 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-inventory" (OuterVolumeSpecName: "inventory") pod "1a61e5f0-3651-4c39-aec6-5c6ae688a94c" (UID: "1a61e5f0-3651-4c39-aec6-5c6ae688a94c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:42:34 crc kubenswrapper[4711]: I1202 10:42:34.995846 4711 scope.go:117] "RemoveContainer" containerID="a89f89fb79ccf8741e95403278c76345fff6898f9174f73c2f7ed734d89b333f" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.007934 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r6x6\" (UniqueName: \"kubernetes.io/projected/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-kube-api-access-4r6x6\") on node \"crc\" DevicePath \"\"" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.008115 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.008182 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a61e5f0-3651-4c39-aec6-5c6ae688a94c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 
10:42:35.017478 4711 scope.go:117] "RemoveContainer" containerID="26b2b6bfb2c1e4df74b5657ee53508af2a187aea10cc17c99baf789fabdea733" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.035162 4711 scope.go:117] "RemoveContainer" containerID="47fa764c7333bee6bc1964e356ee61dbfabb3e9825a022b42293542c1003735d" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.055506 4711 scope.go:117] "RemoveContainer" containerID="1ccdd062e522abbec1944a8ee7148a542590712f2e9d5e34803bb78ee2f462b8" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.078481 4711 scope.go:117] "RemoveContainer" containerID="cb3977ccc770ae471abbd9e993cb93fda494ef6e15702b11add2b16bec094fed" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.116529 4711 scope.go:117] "RemoveContainer" containerID="a23865f833befa176ed225d08d57883ae1c18cbd1a75d9a79dde95c3cece8ad6" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.134099 4711 scope.go:117] "RemoveContainer" containerID="869fbab40f7610a9667a0116670049f9ec34d24319fd225cc0532b1b6e4d438b" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.154578 4711 scope.go:117] "RemoveContainer" containerID="0184e7874cd5edf92cd541ddd24dd35c294a13b384a4735361fd871b1cf17a95" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.173503 4711 scope.go:117] "RemoveContainer" containerID="24d1a5b493a52689447976f3350264a9cd16de866ad7a1cbd07b6cf9f89d4626" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.199613 4711 scope.go:117] "RemoveContainer" containerID="56e7c9ca0248e1785a00316996609f3abee131645d899af82cfda258f40629dd" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.221835 4711 scope.go:117] "RemoveContainer" containerID="67d2e19949338fb061366b68700e136150a08318a59df57aec79e7744264e0ec" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.397872 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" 
event={"ID":"1a61e5f0-3651-4c39-aec6-5c6ae688a94c","Type":"ContainerDied","Data":"611683a177d054d64a0c64d59e2a666e30284946ade2c31adfb1bd9ed9fb3799"} Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.398145 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.398352 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611683a177d054d64a0c64d59e2a666e30284946ade2c31adfb1bd9ed9fb3799" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.478488 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq"] Dec 02 10:42:35 crc kubenswrapper[4711]: E1202 10:42:35.479048 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a61e5f0-3651-4c39-aec6-5c6ae688a94c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.479073 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a61e5f0-3651-4c39-aec6-5c6ae688a94c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.479272 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a61e5f0-3651-4c39-aec6-5c6ae688a94c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.480047 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.483205 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq"] Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.485817 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.485817 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.486402 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.486455 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.621819 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.621886 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.621914 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzwk\" (UniqueName: \"kubernetes.io/projected/c9425d80-55ad-4f08-acd8-4389676e9b71-kube-api-access-qwzwk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.723521 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.723639 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.723692 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwzwk\" (UniqueName: \"kubernetes.io/projected/c9425d80-55ad-4f08-acd8-4389676e9b71-kube-api-access-qwzwk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.728899 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.729043 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.755508 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwzwk\" (UniqueName: \"kubernetes.io/projected/c9425d80-55ad-4f08-acd8-4389676e9b71-kube-api-access-qwzwk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:35 crc kubenswrapper[4711]: I1202 10:42:35.798223 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:42:36 crc kubenswrapper[4711]: I1202 10:42:36.362139 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq"] Dec 02 10:42:36 crc kubenswrapper[4711]: I1202 10:42:36.414507 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" event={"ID":"c9425d80-55ad-4f08-acd8-4389676e9b71","Type":"ContainerStarted","Data":"3bf36365a263b72008633b4f889abc2bc8bd8b23c92b7769cf23d15dd0eb12cf"} Dec 02 10:42:38 crc kubenswrapper[4711]: I1202 10:42:38.078727 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:42:38 crc kubenswrapper[4711]: E1202 10:42:38.079406 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:42:38 crc kubenswrapper[4711]: I1202 10:42:38.444974 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" event={"ID":"c9425d80-55ad-4f08-acd8-4389676e9b71","Type":"ContainerStarted","Data":"4d224a97abd9be7a37de75999753a3782b3efb1c67bb7d214b88f0ed72920083"} Dec 02 10:42:53 crc kubenswrapper[4711]: I1202 10:42:53.078947 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:42:53 crc kubenswrapper[4711]: E1202 10:42:53.079940 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:43:03 crc kubenswrapper[4711]: I1202 10:43:03.057679 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" podStartSLOduration=27.070389287 podStartE2EDuration="28.057601552s" podCreationTimestamp="2025-12-02 10:42:35 +0000 UTC" firstStartedPulling="2025-12-02 10:42:36.364069006 +0000 UTC m=+1746.073435463" lastFinishedPulling="2025-12-02 10:42:37.351281271 +0000 UTC m=+1747.060647728" observedRunningTime="2025-12-02 10:42:38.463642487 +0000 UTC m=+1748.173008974" watchObservedRunningTime="2025-12-02 10:43:03.057601552 +0000 UTC m=+1772.766968079" Dec 02 10:43:03 crc kubenswrapper[4711]: I1202 10:43:03.062587 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rllxq"] Dec 02 10:43:03 crc kubenswrapper[4711]: I1202 10:43:03.073833 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rllxq"] Dec 02 10:43:03 crc kubenswrapper[4711]: I1202 10:43:03.096746 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e7dd62-8534-48ac-9b10-3cafac8b1192" path="/var/lib/kubelet/pods/44e7dd62-8534-48ac-9b10-3cafac8b1192/volumes" Dec 02 10:43:07 crc kubenswrapper[4711]: I1202 10:43:07.078681 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:43:07 crc kubenswrapper[4711]: E1202 10:43:07.079333 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:43:15 crc kubenswrapper[4711]: I1202 10:43:15.056276 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xfj2j"] Dec 02 10:43:15 crc kubenswrapper[4711]: I1202 10:43:15.067258 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ss597"] Dec 02 10:43:15 crc kubenswrapper[4711]: I1202 10:43:15.076922 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pk6jj"] Dec 02 10:43:15 crc kubenswrapper[4711]: I1202 10:43:15.097106 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ss597"] Dec 02 10:43:15 crc kubenswrapper[4711]: I1202 10:43:15.099025 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xfj2j"] Dec 02 10:43:15 crc kubenswrapper[4711]: I1202 10:43:15.105454 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pk6jj"] Dec 02 10:43:17 crc kubenswrapper[4711]: I1202 10:43:17.111092 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426ff483-f882-4d91-b5da-bab147d2886d" path="/var/lib/kubelet/pods/426ff483-f882-4d91-b5da-bab147d2886d/volumes" Dec 02 10:43:17 crc kubenswrapper[4711]: I1202 10:43:17.112903 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5266d7e0-bd1b-4266-b8eb-af6080873ad5" path="/var/lib/kubelet/pods/5266d7e0-bd1b-4266-b8eb-af6080873ad5/volumes" Dec 02 10:43:17 crc kubenswrapper[4711]: I1202 10:43:17.113684 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708582b5-ed1b-43e9-959a-482979700291" path="/var/lib/kubelet/pods/708582b5-ed1b-43e9-959a-482979700291/volumes" Dec 02 10:43:20 crc kubenswrapper[4711]: 
I1202 10:43:20.079744 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:43:20 crc kubenswrapper[4711]: E1202 10:43:20.080682 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:43:30 crc kubenswrapper[4711]: I1202 10:43:30.032591 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-hhnhk"] Dec 02 10:43:30 crc kubenswrapper[4711]: I1202 10:43:30.044453 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-hhnhk"] Dec 02 10:43:31 crc kubenswrapper[4711]: I1202 10:43:31.096708 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c36d7741-4744-4076-ad79-2cd1aca48cec" path="/var/lib/kubelet/pods/c36d7741-4744-4076-ad79-2cd1aca48cec/volumes" Dec 02 10:43:33 crc kubenswrapper[4711]: I1202 10:43:33.079501 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:43:33 crc kubenswrapper[4711]: E1202 10:43:33.080519 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:43:35 crc kubenswrapper[4711]: I1202 10:43:35.523619 4711 scope.go:117] "RemoveContainer" 
containerID="6cac01906a9791b422daf53ba8810283b460891ae2b777371dc1dca71ca8866c" Dec 02 10:43:35 crc kubenswrapper[4711]: I1202 10:43:35.554027 4711 scope.go:117] "RemoveContainer" containerID="5447f2d26caf26f18e97fa62c49a031feb64bd956fbad5d280640e5eb96be6bf" Dec 02 10:43:35 crc kubenswrapper[4711]: I1202 10:43:35.619150 4711 scope.go:117] "RemoveContainer" containerID="49b8995f76493f4605ebe7c93ebc2d87e990271c794df311ee5a27f6ab3b0e2f" Dec 02 10:43:35 crc kubenswrapper[4711]: I1202 10:43:35.655727 4711 scope.go:117] "RemoveContainer" containerID="847b4ebc5d451fed55329ad84a6c8ab342e34e014b63d8e09a820217604ba908" Dec 02 10:43:35 crc kubenswrapper[4711]: I1202 10:43:35.685684 4711 scope.go:117] "RemoveContainer" containerID="d8df2dc80ebac905a5058820497382d476f98341e2433c0227b7a47c6d1c8abc" Dec 02 10:43:38 crc kubenswrapper[4711]: I1202 10:43:38.892717 4711 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8455cffcc7-gvzs8" podUID="4bb0ebbe-23dd-4970-bc78-799616ef2e21" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 02 10:43:45 crc kubenswrapper[4711]: I1202 10:43:45.078704 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:43:45 crc kubenswrapper[4711]: E1202 10:43:45.079460 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:43:46 crc kubenswrapper[4711]: I1202 10:43:46.138544 4711 generic.go:334] "Generic (PLEG): container finished" podID="c9425d80-55ad-4f08-acd8-4389676e9b71" 
containerID="4d224a97abd9be7a37de75999753a3782b3efb1c67bb7d214b88f0ed72920083" exitCode=0 Dec 02 10:43:46 crc kubenswrapper[4711]: I1202 10:43:46.138581 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" event={"ID":"c9425d80-55ad-4f08-acd8-4389676e9b71","Type":"ContainerDied","Data":"4d224a97abd9be7a37de75999753a3782b3efb1c67bb7d214b88f0ed72920083"} Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.570470 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.621270 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-ssh-key\") pod \"c9425d80-55ad-4f08-acd8-4389676e9b71\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.621441 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-inventory\") pod \"c9425d80-55ad-4f08-acd8-4389676e9b71\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.621621 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwzwk\" (UniqueName: \"kubernetes.io/projected/c9425d80-55ad-4f08-acd8-4389676e9b71-kube-api-access-qwzwk\") pod \"c9425d80-55ad-4f08-acd8-4389676e9b71\" (UID: \"c9425d80-55ad-4f08-acd8-4389676e9b71\") " Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.626783 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9425d80-55ad-4f08-acd8-4389676e9b71-kube-api-access-qwzwk" (OuterVolumeSpecName: "kube-api-access-qwzwk") pod 
"c9425d80-55ad-4f08-acd8-4389676e9b71" (UID: "c9425d80-55ad-4f08-acd8-4389676e9b71"). InnerVolumeSpecName "kube-api-access-qwzwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.645931 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-inventory" (OuterVolumeSpecName: "inventory") pod "c9425d80-55ad-4f08-acd8-4389676e9b71" (UID: "c9425d80-55ad-4f08-acd8-4389676e9b71"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.646174 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c9425d80-55ad-4f08-acd8-4389676e9b71" (UID: "c9425d80-55ad-4f08-acd8-4389676e9b71"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.723789 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.723819 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9425d80-55ad-4f08-acd8-4389676e9b71-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:47 crc kubenswrapper[4711]: I1202 10:43:47.723830 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwzwk\" (UniqueName: \"kubernetes.io/projected/c9425d80-55ad-4f08-acd8-4389676e9b71-kube-api-access-qwzwk\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.165151 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" event={"ID":"c9425d80-55ad-4f08-acd8-4389676e9b71","Type":"ContainerDied","Data":"3bf36365a263b72008633b4f889abc2bc8bd8b23c92b7769cf23d15dd0eb12cf"} Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.165203 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.165260 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf36365a263b72008633b4f889abc2bc8bd8b23c92b7769cf23d15dd0eb12cf" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.256217 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk"] Dec 02 10:43:48 crc kubenswrapper[4711]: E1202 10:43:48.256648 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9425d80-55ad-4f08-acd8-4389676e9b71" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.256674 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9425d80-55ad-4f08-acd8-4389676e9b71" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.256868 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9425d80-55ad-4f08-acd8-4389676e9b71" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.257519 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.259579 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.259636 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.259732 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.260195 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.277580 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk"] Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.334290 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrxx6\" (UniqueName: \"kubernetes.io/projected/c44d97e0-717c-4337-910f-68b93cc653a7-kube-api-access-wrxx6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.334331 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 
10:43:48.334451 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.436858 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrxx6\" (UniqueName: \"kubernetes.io/projected/c44d97e0-717c-4337-910f-68b93cc653a7-kube-api-access-wrxx6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.436996 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.437103 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.443895 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.454075 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.455224 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrxx6\" (UniqueName: \"kubernetes.io/projected/c44d97e0-717c-4337-910f-68b93cc653a7-kube-api-access-wrxx6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55wpk\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:48 crc kubenswrapper[4711]: I1202 10:43:48.575585 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:49 crc kubenswrapper[4711]: I1202 10:43:49.098101 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk"] Dec 02 10:43:49 crc kubenswrapper[4711]: I1202 10:43:49.176140 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" event={"ID":"c44d97e0-717c-4337-910f-68b93cc653a7","Type":"ContainerStarted","Data":"5e58e9376206eb48fcae57a1f3e1c2d086f8be107cce8d26938ad6ba09a4f1b5"} Dec 02 10:43:50 crc kubenswrapper[4711]: I1202 10:43:50.190481 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" event={"ID":"c44d97e0-717c-4337-910f-68b93cc653a7","Type":"ContainerStarted","Data":"4633611ac654e741e349c9c1e65711658baac1adb757d55963d23b49c1ec95b5"} Dec 02 10:43:55 crc kubenswrapper[4711]: I1202 10:43:55.239479 4711 generic.go:334] "Generic (PLEG): container finished" podID="c44d97e0-717c-4337-910f-68b93cc653a7" containerID="4633611ac654e741e349c9c1e65711658baac1adb757d55963d23b49c1ec95b5" exitCode=0 Dec 02 10:43:55 crc kubenswrapper[4711]: I1202 10:43:55.239564 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" event={"ID":"c44d97e0-717c-4337-910f-68b93cc653a7","Type":"ContainerDied","Data":"4633611ac654e741e349c9c1e65711658baac1adb757d55963d23b49c1ec95b5"} Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.674719 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.817057 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-inventory\") pod \"c44d97e0-717c-4337-910f-68b93cc653a7\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.817243 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-ssh-key\") pod \"c44d97e0-717c-4337-910f-68b93cc653a7\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.817365 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrxx6\" (UniqueName: \"kubernetes.io/projected/c44d97e0-717c-4337-910f-68b93cc653a7-kube-api-access-wrxx6\") pod \"c44d97e0-717c-4337-910f-68b93cc653a7\" (UID: \"c44d97e0-717c-4337-910f-68b93cc653a7\") " Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.826461 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44d97e0-717c-4337-910f-68b93cc653a7-kube-api-access-wrxx6" (OuterVolumeSpecName: "kube-api-access-wrxx6") pod "c44d97e0-717c-4337-910f-68b93cc653a7" (UID: "c44d97e0-717c-4337-910f-68b93cc653a7"). InnerVolumeSpecName "kube-api-access-wrxx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.850828 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-inventory" (OuterVolumeSpecName: "inventory") pod "c44d97e0-717c-4337-910f-68b93cc653a7" (UID: "c44d97e0-717c-4337-910f-68b93cc653a7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.860302 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c44d97e0-717c-4337-910f-68b93cc653a7" (UID: "c44d97e0-717c-4337-910f-68b93cc653a7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.920507 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrxx6\" (UniqueName: \"kubernetes.io/projected/c44d97e0-717c-4337-910f-68b93cc653a7-kube-api-access-wrxx6\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.920726 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:56 crc kubenswrapper[4711]: I1202 10:43:56.920737 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44d97e0-717c-4337-910f-68b93cc653a7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.259755 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" event={"ID":"c44d97e0-717c-4337-910f-68b93cc653a7","Type":"ContainerDied","Data":"5e58e9376206eb48fcae57a1f3e1c2d086f8be107cce8d26938ad6ba09a4f1b5"} Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.259812 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e58e9376206eb48fcae57a1f3e1c2d086f8be107cce8d26938ad6ba09a4f1b5" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.259809 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55wpk" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.326702 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz"] Dec 02 10:43:57 crc kubenswrapper[4711]: E1202 10:43:57.327188 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44d97e0-717c-4337-910f-68b93cc653a7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.327211 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44d97e0-717c-4337-910f-68b93cc653a7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.327398 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44d97e0-717c-4337-910f-68b93cc653a7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.328089 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.331501 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.331660 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.331675 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.334004 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.336349 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz"] Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.428680 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.428767 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hdp\" (UniqueName: \"kubernetes.io/projected/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-kube-api-access-99hdp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.428831 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.530548 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99hdp\" (UniqueName: \"kubernetes.io/projected/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-kube-api-access-99hdp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.530699 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.530826 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.550025 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: 
\"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.551601 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99hdp\" (UniqueName: \"kubernetes.io/projected/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-kube-api-access-99hdp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.552258 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w4jpz\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:57 crc kubenswrapper[4711]: I1202 10:43:57.650206 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:43:58 crc kubenswrapper[4711]: I1202 10:43:58.173946 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz"] Dec 02 10:43:58 crc kubenswrapper[4711]: I1202 10:43:58.267546 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" event={"ID":"17177c8c-c071-4484-b8e6-2b3c49e8a3e4","Type":"ContainerStarted","Data":"56e5a85577b78866ba0548ca1a9bd021a81335514fc45db8f2337c576b0092f1"} Dec 02 10:43:59 crc kubenswrapper[4711]: I1202 10:43:59.279976 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" event={"ID":"17177c8c-c071-4484-b8e6-2b3c49e8a3e4","Type":"ContainerStarted","Data":"f04c85b13c3002d57f4e6c4f728498cc4d9b41f719f3d6b4db82b470928488c1"} Dec 02 10:43:59 crc kubenswrapper[4711]: I1202 10:43:59.303621 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" podStartSLOduration=1.836089133 podStartE2EDuration="2.30357772s" podCreationTimestamp="2025-12-02 10:43:57 +0000 UTC" firstStartedPulling="2025-12-02 10:43:58.173632957 +0000 UTC m=+1827.882999404" lastFinishedPulling="2025-12-02 10:43:58.641121534 +0000 UTC m=+1828.350487991" observedRunningTime="2025-12-02 10:43:59.301704649 +0000 UTC m=+1829.011071116" watchObservedRunningTime="2025-12-02 10:43:59.30357772 +0000 UTC m=+1829.012944177" Dec 02 10:44:00 crc kubenswrapper[4711]: I1202 10:44:00.078992 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:44:00 crc kubenswrapper[4711]: E1202 10:44:00.079320 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:44:07 crc kubenswrapper[4711]: I1202 10:44:07.049006 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ctfhf"] Dec 02 10:44:07 crc kubenswrapper[4711]: I1202 10:44:07.068991 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wghts"] Dec 02 10:44:07 crc kubenswrapper[4711]: I1202 10:44:07.088395 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ctfhf"] Dec 02 10:44:07 crc kubenswrapper[4711]: I1202 10:44:07.088437 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wghts"] Dec 02 10:44:08 crc kubenswrapper[4711]: I1202 10:44:08.042337 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-725c-account-create-update-ncz7n"] Dec 02 10:44:08 crc kubenswrapper[4711]: I1202 10:44:08.054028 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-845d-account-create-update-26jgv"] Dec 02 10:44:08 crc kubenswrapper[4711]: I1202 10:44:08.061583 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q64zj"] Dec 02 10:44:08 crc kubenswrapper[4711]: I1202 10:44:08.068685 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-725c-account-create-update-ncz7n"] Dec 02 10:44:08 crc kubenswrapper[4711]: I1202 10:44:08.076340 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6e60-account-create-update-mfx7j"] Dec 02 10:44:08 crc kubenswrapper[4711]: I1202 10:44:08.085151 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q64zj"] Dec 02 10:44:08 crc 
kubenswrapper[4711]: I1202 10:44:08.092608 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-845d-account-create-update-26jgv"] Dec 02 10:44:08 crc kubenswrapper[4711]: I1202 10:44:08.101251 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6e60-account-create-update-mfx7j"] Dec 02 10:44:09 crc kubenswrapper[4711]: I1202 10:44:09.097262 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078d4918-1bcb-4825-a5e4-4a4130593668" path="/var/lib/kubelet/pods/078d4918-1bcb-4825-a5e4-4a4130593668/volumes" Dec 02 10:44:09 crc kubenswrapper[4711]: I1202 10:44:09.098217 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1c4aab-5848-4313-a0b0-b95d2a2f660d" path="/var/lib/kubelet/pods/1f1c4aab-5848-4313-a0b0-b95d2a2f660d/volumes" Dec 02 10:44:09 crc kubenswrapper[4711]: I1202 10:44:09.098719 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f23d852-29f7-4a94-8fb2-05115f135ac3" path="/var/lib/kubelet/pods/2f23d852-29f7-4a94-8fb2-05115f135ac3/volumes" Dec 02 10:44:09 crc kubenswrapper[4711]: I1202 10:44:09.099354 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f697b83-8112-40d8-a328-bfc157ffcdde" path="/var/lib/kubelet/pods/5f697b83-8112-40d8-a328-bfc157ffcdde/volumes" Dec 02 10:44:09 crc kubenswrapper[4711]: I1202 10:44:09.101543 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a610c00-47f7-4347-8890-f5392ae78555" path="/var/lib/kubelet/pods/8a610c00-47f7-4347-8890-f5392ae78555/volumes" Dec 02 10:44:09 crc kubenswrapper[4711]: I1202 10:44:09.102892 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05d72c3-0be9-40ae-aa5d-f24c5b02adbf" path="/var/lib/kubelet/pods/e05d72c3-0be9-40ae-aa5d-f24c5b02adbf/volumes" Dec 02 10:44:11 crc kubenswrapper[4711]: I1202 10:44:11.095221 4711 scope.go:117] "RemoveContainer" 
containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:44:11 crc kubenswrapper[4711]: E1202 10:44:11.095671 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:44:26 crc kubenswrapper[4711]: I1202 10:44:26.079166 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:44:26 crc kubenswrapper[4711]: I1202 10:44:26.543219 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"90d7b159bfc5894ef3714c163745ef7fbdd8eca8ae697756aad37eb187f934b6"} Dec 02 10:44:32 crc kubenswrapper[4711]: I1202 10:44:32.049125 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rsnn9"] Dec 02 10:44:32 crc kubenswrapper[4711]: I1202 10:44:32.057440 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rsnn9"] Dec 02 10:44:33 crc kubenswrapper[4711]: I1202 10:44:33.089151 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c592ed4-2527-4546-8500-5dfc26ee5dca" path="/var/lib/kubelet/pods/6c592ed4-2527-4546-8500-5dfc26ee5dca/volumes" Dec 02 10:44:35 crc kubenswrapper[4711]: I1202 10:44:35.852240 4711 scope.go:117] "RemoveContainer" containerID="4f2026f9819e0f7714463adaef34a8a489024f3cc7c1fc2a82fb712506e043ae" Dec 02 10:44:35 crc kubenswrapper[4711]: I1202 10:44:35.887279 4711 scope.go:117] "RemoveContainer" 
containerID="e7582976081adb5b24e6a466af25e03646af7b26ab294f9be45b6a7c1d61bc48" Dec 02 10:44:36 crc kubenswrapper[4711]: I1202 10:44:36.038322 4711 scope.go:117] "RemoveContainer" containerID="459858a58d3af25ea8ff08e1be5694b1437081792db913adae01b827e977e039" Dec 02 10:44:36 crc kubenswrapper[4711]: I1202 10:44:36.084425 4711 scope.go:117] "RemoveContainer" containerID="da0fdf789d92ae9490e4afdf53b0d5eb2f73f32309daacf9b6504e17fe96a7df" Dec 02 10:44:36 crc kubenswrapper[4711]: I1202 10:44:36.117686 4711 scope.go:117] "RemoveContainer" containerID="983849341a8cb019b6b6b94dc04813682d57af7c0826a5d4d952d0c0f506ae5b" Dec 02 10:44:36 crc kubenswrapper[4711]: I1202 10:44:36.170126 4711 scope.go:117] "RemoveContainer" containerID="b0d8f709a1180a24670476673d026c9be7797a5b21c52db35258281e221088ad" Dec 02 10:44:36 crc kubenswrapper[4711]: I1202 10:44:36.223857 4711 scope.go:117] "RemoveContainer" containerID="a06aaad519ffb24a41da223839471d01c0b1e3ebb9ecf9c1070636b43a7ae778" Dec 02 10:44:36 crc kubenswrapper[4711]: I1202 10:44:36.682669 4711 generic.go:334] "Generic (PLEG): container finished" podID="17177c8c-c071-4484-b8e6-2b3c49e8a3e4" containerID="f04c85b13c3002d57f4e6c4f728498cc4d9b41f719f3d6b4db82b470928488c1" exitCode=0 Dec 02 10:44:36 crc kubenswrapper[4711]: I1202 10:44:36.682759 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" event={"ID":"17177c8c-c071-4484-b8e6-2b3c49e8a3e4","Type":"ContainerDied","Data":"f04c85b13c3002d57f4e6c4f728498cc4d9b41f719f3d6b4db82b470928488c1"} Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.197228 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.266216 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99hdp\" (UniqueName: \"kubernetes.io/projected/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-kube-api-access-99hdp\") pod \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.266396 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-ssh-key\") pod \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.266567 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-inventory\") pod \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\" (UID: \"17177c8c-c071-4484-b8e6-2b3c49e8a3e4\") " Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.274718 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-kube-api-access-99hdp" (OuterVolumeSpecName: "kube-api-access-99hdp") pod "17177c8c-c071-4484-b8e6-2b3c49e8a3e4" (UID: "17177c8c-c071-4484-b8e6-2b3c49e8a3e4"). InnerVolumeSpecName "kube-api-access-99hdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.304822 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-inventory" (OuterVolumeSpecName: "inventory") pod "17177c8c-c071-4484-b8e6-2b3c49e8a3e4" (UID: "17177c8c-c071-4484-b8e6-2b3c49e8a3e4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.307017 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17177c8c-c071-4484-b8e6-2b3c49e8a3e4" (UID: "17177c8c-c071-4484-b8e6-2b3c49e8a3e4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.369565 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.369614 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.369634 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99hdp\" (UniqueName: \"kubernetes.io/projected/17177c8c-c071-4484-b8e6-2b3c49e8a3e4-kube-api-access-99hdp\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.704926 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" event={"ID":"17177c8c-c071-4484-b8e6-2b3c49e8a3e4","Type":"ContainerDied","Data":"56e5a85577b78866ba0548ca1a9bd021a81335514fc45db8f2337c576b0092f1"} Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.705019 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56e5a85577b78866ba0548ca1a9bd021a81335514fc45db8f2337c576b0092f1" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.705051 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w4jpz" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.873604 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5"] Dec 02 10:44:38 crc kubenswrapper[4711]: E1202 10:44:38.874361 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17177c8c-c071-4484-b8e6-2b3c49e8a3e4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.874410 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="17177c8c-c071-4484-b8e6-2b3c49e8a3e4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.874690 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="17177c8c-c071-4484-b8e6-2b3c49e8a3e4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.875904 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.878678 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.878822 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.879626 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.880454 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:44:38 crc kubenswrapper[4711]: I1202 10:44:38.889286 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5"] Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.080868 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.080980 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2cz\" (UniqueName: \"kubernetes.io/projected/6cc0043c-689a-4c2f-b70f-a4a3c5344385-kube-api-access-zb2cz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.081292 4711 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.185141 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.186393 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2cz\" (UniqueName: \"kubernetes.io/projected/6cc0043c-689a-4c2f-b70f-a4a3c5344385-kube-api-access-zb2cz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.186652 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.193747 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: 
\"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.199908 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.219633 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb2cz\" (UniqueName: \"kubernetes.io/projected/6cc0043c-689a-4c2f-b70f-a4a3c5344385-kube-api-access-zb2cz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:39 crc kubenswrapper[4711]: I1202 10:44:39.498000 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:44:40 crc kubenswrapper[4711]: I1202 10:44:40.106454 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5"] Dec 02 10:44:40 crc kubenswrapper[4711]: I1202 10:44:40.729081 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" event={"ID":"6cc0043c-689a-4c2f-b70f-a4a3c5344385","Type":"ContainerStarted","Data":"03fb4fc07c027f25c29fa409e0f97972e9c118f93ec86e71e597b927671f1645"} Dec 02 10:44:41 crc kubenswrapper[4711]: I1202 10:44:41.737922 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" event={"ID":"6cc0043c-689a-4c2f-b70f-a4a3c5344385","Type":"ContainerStarted","Data":"b0fc4de81267d738ed199bfd7db193bc6dc8a94c93deb357ec92686a850a732f"} Dec 02 10:44:41 crc kubenswrapper[4711]: I1202 10:44:41.767408 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" podStartSLOduration=3.226178009 podStartE2EDuration="3.767338331s" podCreationTimestamp="2025-12-02 10:44:38 +0000 UTC" firstStartedPulling="2025-12-02 10:44:40.115982641 +0000 UTC m=+1869.825349128" lastFinishedPulling="2025-12-02 10:44:40.657142863 +0000 UTC m=+1870.366509450" observedRunningTime="2025-12-02 10:44:41.753082906 +0000 UTC m=+1871.462449403" watchObservedRunningTime="2025-12-02 10:44:41.767338331 +0000 UTC m=+1871.476704788" Dec 02 10:44:54 crc kubenswrapper[4711]: I1202 10:44:54.058989 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6z9lh"] Dec 02 10:44:54 crc kubenswrapper[4711]: I1202 10:44:54.070145 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vlft6"] Dec 02 10:44:54 crc kubenswrapper[4711]: 
I1202 10:44:54.081283 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vlft6"] Dec 02 10:44:54 crc kubenswrapper[4711]: I1202 10:44:54.088989 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6z9lh"] Dec 02 10:44:55 crc kubenswrapper[4711]: I1202 10:44:55.097306 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174870ec-da5f-4488-866c-3dcdcdddedf2" path="/var/lib/kubelet/pods/174870ec-da5f-4488-866c-3dcdcdddedf2/volumes" Dec 02 10:44:55 crc kubenswrapper[4711]: I1202 10:44:55.099735 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8df297e-2c28-4d8e-8c48-46bbaef36487" path="/var/lib/kubelet/pods/e8df297e-2c28-4d8e-8c48-46bbaef36487/volumes" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.175798 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx"] Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.178541 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.181145 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.181549 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.190869 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx"] Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.374962 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vgmp\" (UniqueName: \"kubernetes.io/projected/63b237a5-6020-484d-824d-f1463a3d864e-kube-api-access-9vgmp\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.375043 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b237a5-6020-484d-824d-f1463a3d864e-config-volume\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.375067 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63b237a5-6020-484d-824d-f1463a3d864e-secret-volume\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.477492 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vgmp\" (UniqueName: \"kubernetes.io/projected/63b237a5-6020-484d-824d-f1463a3d864e-kube-api-access-9vgmp\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.477557 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b237a5-6020-484d-824d-f1463a3d864e-config-volume\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.477579 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63b237a5-6020-484d-824d-f1463a3d864e-secret-volume\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.478885 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b237a5-6020-484d-824d-f1463a3d864e-config-volume\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.486511 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/63b237a5-6020-484d-824d-f1463a3d864e-secret-volume\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.495004 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vgmp\" (UniqueName: \"kubernetes.io/projected/63b237a5-6020-484d-824d-f1463a3d864e-kube-api-access-9vgmp\") pod \"collect-profiles-29411205-4nkgx\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.520203 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:00 crc kubenswrapper[4711]: I1202 10:45:00.991903 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx"] Dec 02 10:45:02 crc kubenswrapper[4711]: I1202 10:45:02.000592 4711 generic.go:334] "Generic (PLEG): container finished" podID="63b237a5-6020-484d-824d-f1463a3d864e" containerID="1908890b7e84b2696abd189c317445fcfbce2afa870a35e00a36792dd52882b5" exitCode=0 Dec 02 10:45:02 crc kubenswrapper[4711]: I1202 10:45:02.000834 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" event={"ID":"63b237a5-6020-484d-824d-f1463a3d864e","Type":"ContainerDied","Data":"1908890b7e84b2696abd189c317445fcfbce2afa870a35e00a36792dd52882b5"} Dec 02 10:45:02 crc kubenswrapper[4711]: I1202 10:45:02.001377 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" 
event={"ID":"63b237a5-6020-484d-824d-f1463a3d864e","Type":"ContainerStarted","Data":"9a73b788971d2b7ea562037044879856de3cc3cfbb17a0361250ca0f8a8362a4"} Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.340752 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.536998 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b237a5-6020-484d-824d-f1463a3d864e-config-volume\") pod \"63b237a5-6020-484d-824d-f1463a3d864e\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.537088 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vgmp\" (UniqueName: \"kubernetes.io/projected/63b237a5-6020-484d-824d-f1463a3d864e-kube-api-access-9vgmp\") pod \"63b237a5-6020-484d-824d-f1463a3d864e\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.537379 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63b237a5-6020-484d-824d-f1463a3d864e-secret-volume\") pod \"63b237a5-6020-484d-824d-f1463a3d864e\" (UID: \"63b237a5-6020-484d-824d-f1463a3d864e\") " Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.537871 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63b237a5-6020-484d-824d-f1463a3d864e-config-volume" (OuterVolumeSpecName: "config-volume") pod "63b237a5-6020-484d-824d-f1463a3d864e" (UID: "63b237a5-6020-484d-824d-f1463a3d864e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.538285 4711 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b237a5-6020-484d-824d-f1463a3d864e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.551778 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63b237a5-6020-484d-824d-f1463a3d864e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63b237a5-6020-484d-824d-f1463a3d864e" (UID: "63b237a5-6020-484d-824d-f1463a3d864e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.551917 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b237a5-6020-484d-824d-f1463a3d864e-kube-api-access-9vgmp" (OuterVolumeSpecName: "kube-api-access-9vgmp") pod "63b237a5-6020-484d-824d-f1463a3d864e" (UID: "63b237a5-6020-484d-824d-f1463a3d864e"). InnerVolumeSpecName "kube-api-access-9vgmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.640246 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vgmp\" (UniqueName: \"kubernetes.io/projected/63b237a5-6020-484d-824d-f1463a3d864e-kube-api-access-9vgmp\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:03 crc kubenswrapper[4711]: I1202 10:45:03.640301 4711 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63b237a5-6020-484d-824d-f1463a3d864e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:04 crc kubenswrapper[4711]: I1202 10:45:04.029301 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" event={"ID":"63b237a5-6020-484d-824d-f1463a3d864e","Type":"ContainerDied","Data":"9a73b788971d2b7ea562037044879856de3cc3cfbb17a0361250ca0f8a8362a4"} Dec 02 10:45:04 crc kubenswrapper[4711]: I1202 10:45:04.029596 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a73b788971d2b7ea562037044879856de3cc3cfbb17a0361250ca0f8a8362a4" Dec 02 10:45:04 crc kubenswrapper[4711]: I1202 10:45:04.029378 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-4nkgx" Dec 02 10:45:30 crc kubenswrapper[4711]: I1202 10:45:30.318440 4711 generic.go:334] "Generic (PLEG): container finished" podID="6cc0043c-689a-4c2f-b70f-a4a3c5344385" containerID="b0fc4de81267d738ed199bfd7db193bc6dc8a94c93deb357ec92686a850a732f" exitCode=0 Dec 02 10:45:30 crc kubenswrapper[4711]: I1202 10:45:30.318544 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" event={"ID":"6cc0043c-689a-4c2f-b70f-a4a3c5344385","Type":"ContainerDied","Data":"b0fc4de81267d738ed199bfd7db193bc6dc8a94c93deb357ec92686a850a732f"} Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.760682 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.893196 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb2cz\" (UniqueName: \"kubernetes.io/projected/6cc0043c-689a-4c2f-b70f-a4a3c5344385-kube-api-access-zb2cz\") pod \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.893747 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-inventory\") pod \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.893837 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-ssh-key\") pod \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\" (UID: \"6cc0043c-689a-4c2f-b70f-a4a3c5344385\") " Dec 02 10:45:31 crc kubenswrapper[4711]: 
I1202 10:45:31.900871 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc0043c-689a-4c2f-b70f-a4a3c5344385-kube-api-access-zb2cz" (OuterVolumeSpecName: "kube-api-access-zb2cz") pod "6cc0043c-689a-4c2f-b70f-a4a3c5344385" (UID: "6cc0043c-689a-4c2f-b70f-a4a3c5344385"). InnerVolumeSpecName "kube-api-access-zb2cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.928131 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-inventory" (OuterVolumeSpecName: "inventory") pod "6cc0043c-689a-4c2f-b70f-a4a3c5344385" (UID: "6cc0043c-689a-4c2f-b70f-a4a3c5344385"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.932541 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6cc0043c-689a-4c2f-b70f-a4a3c5344385" (UID: "6cc0043c-689a-4c2f-b70f-a4a3c5344385"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.998405 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.998462 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cc0043c-689a-4c2f-b70f-a4a3c5344385-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:31 crc kubenswrapper[4711]: I1202 10:45:31.998482 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb2cz\" (UniqueName: \"kubernetes.io/projected/6cc0043c-689a-4c2f-b70f-a4a3c5344385-kube-api-access-zb2cz\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.340445 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" event={"ID":"6cc0043c-689a-4c2f-b70f-a4a3c5344385","Type":"ContainerDied","Data":"03fb4fc07c027f25c29fa409e0f97972e9c118f93ec86e71e597b927671f1645"} Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.340530 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.340546 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03fb4fc07c027f25c29fa409e0f97972e9c118f93ec86e71e597b927671f1645" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.459187 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t4djr"] Dec 02 10:45:32 crc kubenswrapper[4711]: E1202 10:45:32.459820 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc0043c-689a-4c2f-b70f-a4a3c5344385" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.459862 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc0043c-689a-4c2f-b70f-a4a3c5344385" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:32 crc kubenswrapper[4711]: E1202 10:45:32.459912 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b237a5-6020-484d-824d-f1463a3d864e" containerName="collect-profiles" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.459921 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b237a5-6020-484d-824d-f1463a3d864e" containerName="collect-profiles" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.460160 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc0043c-689a-4c2f-b70f-a4a3c5344385" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.460191 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b237a5-6020-484d-824d-f1463a3d864e" containerName="collect-profiles" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.461875 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.464686 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.465771 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.466097 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.466233 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.472104 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t4djr"] Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.606426 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.606511 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.607198 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-phnx7\" (UniqueName: \"kubernetes.io/projected/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-kube-api-access-phnx7\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.709466 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.710116 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.710532 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phnx7\" (UniqueName: \"kubernetes.io/projected/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-kube-api-access-phnx7\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.715065 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.721872 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.740835 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phnx7\" (UniqueName: \"kubernetes.io/projected/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-kube-api-access-phnx7\") pod \"ssh-known-hosts-edpm-deployment-t4djr\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:32 crc kubenswrapper[4711]: I1202 10:45:32.802024 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:33 crc kubenswrapper[4711]: I1202 10:45:33.362197 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t4djr"] Dec 02 10:45:33 crc kubenswrapper[4711]: W1202 10:45:33.377070 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a03a125_6a0b_4e81_8df8_48e0085fa9a1.slice/crio-5c07fb68b8560cd2b552b1b2f45f0e3355ad644c75dcf4059b02bb7c64c6ca2d WatchSource:0}: Error finding container 5c07fb68b8560cd2b552b1b2f45f0e3355ad644c75dcf4059b02bb7c64c6ca2d: Status 404 returned error can't find the container with id 5c07fb68b8560cd2b552b1b2f45f0e3355ad644c75dcf4059b02bb7c64c6ca2d Dec 02 10:45:33 crc kubenswrapper[4711]: I1202 10:45:33.380698 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:45:34 crc kubenswrapper[4711]: I1202 10:45:34.362587 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" event={"ID":"4a03a125-6a0b-4e81-8df8-48e0085fa9a1","Type":"ContainerStarted","Data":"03dd08b43e95c9d988b31f0d8b6a8c9568ea12b5cbcf2e3d9578b9dbe6aa4de8"} Dec 02 10:45:34 crc kubenswrapper[4711]: I1202 10:45:34.363257 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" event={"ID":"4a03a125-6a0b-4e81-8df8-48e0085fa9a1","Type":"ContainerStarted","Data":"5c07fb68b8560cd2b552b1b2f45f0e3355ad644c75dcf4059b02bb7c64c6ca2d"} Dec 02 10:45:34 crc kubenswrapper[4711]: I1202 10:45:34.388285 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" podStartSLOduration=1.724402225 podStartE2EDuration="2.388240191s" podCreationTimestamp="2025-12-02 10:45:32 +0000 UTC" firstStartedPulling="2025-12-02 10:45:33.380448729 +0000 UTC m=+1923.089815176" lastFinishedPulling="2025-12-02 10:45:34.044286695 +0000 UTC m=+1923.753653142" observedRunningTime="2025-12-02 10:45:34.379107445 +0000 UTC m=+1924.088473892" watchObservedRunningTime="2025-12-02 10:45:34.388240191 +0000 UTC m=+1924.097606648" Dec 02 10:45:36 crc kubenswrapper[4711]: I1202 10:45:36.364575 4711 scope.go:117] "RemoveContainer" containerID="b30e4cd33baa5dbbc18f184f2d383d80788c4e3be7e8e4ed3accd2f020181bfd" Dec 02 10:45:36 crc kubenswrapper[4711]: I1202 10:45:36.405350 4711 scope.go:117] "RemoveContainer" containerID="a95615ebf49ca8da7157680c43bcc06396b581988046b9c47f0a33aa88e234fa" Dec 02 10:45:39 crc kubenswrapper[4711]: I1202 10:45:39.054596 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sm4md"] Dec 02 10:45:39 crc kubenswrapper[4711]: I1202 10:45:39.070748 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sm4md"] Dec 02 10:45:39 crc kubenswrapper[4711]: I1202 10:45:39.098681 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3706775c-519a-4cf5-ad2c-7da1b55903dd" path="/var/lib/kubelet/pods/3706775c-519a-4cf5-ad2c-7da1b55903dd/volumes" Dec 02 10:45:41 crc kubenswrapper[4711]: I1202 10:45:41.440854 4711 generic.go:334] "Generic (PLEG): container finished" podID="4a03a125-6a0b-4e81-8df8-48e0085fa9a1" containerID="03dd08b43e95c9d988b31f0d8b6a8c9568ea12b5cbcf2e3d9578b9dbe6aa4de8" exitCode=0 Dec 02 10:45:41 crc kubenswrapper[4711]: I1202 10:45:41.440989 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" event={"ID":"4a03a125-6a0b-4e81-8df8-48e0085fa9a1","Type":"ContainerDied","Data":"03dd08b43e95c9d988b31f0d8b6a8c9568ea12b5cbcf2e3d9578b9dbe6aa4de8"} Dec 02 10:45:42 crc kubenswrapper[4711]: I1202 10:45:42.831611 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:42 crc kubenswrapper[4711]: I1202 10:45:42.933639 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-inventory-0\") pod \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " Dec 02 10:45:42 crc kubenswrapper[4711]: I1202 10:45:42.933834 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phnx7\" (UniqueName: \"kubernetes.io/projected/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-kube-api-access-phnx7\") pod \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " Dec 02 10:45:42 crc kubenswrapper[4711]: I1202 10:45:42.933969 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-ssh-key-openstack-edpm-ipam\") pod \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\" (UID: \"4a03a125-6a0b-4e81-8df8-48e0085fa9a1\") " Dec 02 
10:45:42 crc kubenswrapper[4711]: I1202 10:45:42.948305 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-kube-api-access-phnx7" (OuterVolumeSpecName: "kube-api-access-phnx7") pod "4a03a125-6a0b-4e81-8df8-48e0085fa9a1" (UID: "4a03a125-6a0b-4e81-8df8-48e0085fa9a1"). InnerVolumeSpecName "kube-api-access-phnx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:45:42 crc kubenswrapper[4711]: I1202 10:45:42.967604 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "4a03a125-6a0b-4e81-8df8-48e0085fa9a1" (UID: "4a03a125-6a0b-4e81-8df8-48e0085fa9a1"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:42 crc kubenswrapper[4711]: I1202 10:45:42.967866 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a03a125-6a0b-4e81-8df8-48e0085fa9a1" (UID: "4a03a125-6a0b-4e81-8df8-48e0085fa9a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.035676 4711 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.035712 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phnx7\" (UniqueName: \"kubernetes.io/projected/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-kube-api-access-phnx7\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.035725 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a03a125-6a0b-4e81-8df8-48e0085fa9a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.457382 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" event={"ID":"4a03a125-6a0b-4e81-8df8-48e0085fa9a1","Type":"ContainerDied","Data":"5c07fb68b8560cd2b552b1b2f45f0e3355ad644c75dcf4059b02bb7c64c6ca2d"} Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.457425 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c07fb68b8560cd2b552b1b2f45f0e3355ad644c75dcf4059b02bb7c64c6ca2d" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.457480 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t4djr" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.547149 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh"] Dec 02 10:45:43 crc kubenswrapper[4711]: E1202 10:45:43.548095 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a03a125-6a0b-4e81-8df8-48e0085fa9a1" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.548126 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a03a125-6a0b-4e81-8df8-48e0085fa9a1" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.548403 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a03a125-6a0b-4e81-8df8-48e0085fa9a1" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.549178 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.551852 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.552321 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.553215 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.553472 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.555525 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh"] Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.648399 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.648552 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.648594 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9hdj\" (UniqueName: \"kubernetes.io/projected/ec850345-39cb-45c3-881d-aa6f59cf2c7a-kube-api-access-g9hdj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.750824 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.751077 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.751131 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9hdj\" (UniqueName: \"kubernetes.io/projected/ec850345-39cb-45c3-881d-aa6f59cf2c7a-kube-api-access-g9hdj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.755177 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.763521 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.778134 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9hdj\" (UniqueName: \"kubernetes.io/projected/ec850345-39cb-45c3-881d-aa6f59cf2c7a-kube-api-access-g9hdj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-frrkh\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:43 crc kubenswrapper[4711]: I1202 10:45:43.923812 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:44 crc kubenswrapper[4711]: I1202 10:45:44.501436 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh"] Dec 02 10:45:44 crc kubenswrapper[4711]: W1202 10:45:44.503004 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec850345_39cb_45c3_881d_aa6f59cf2c7a.slice/crio-dcd9f983e3fa90713d57dd4be2025a973bddf8223176cb0fb25f81af245ac0c9 WatchSource:0}: Error finding container dcd9f983e3fa90713d57dd4be2025a973bddf8223176cb0fb25f81af245ac0c9: Status 404 returned error can't find the container with id dcd9f983e3fa90713d57dd4be2025a973bddf8223176cb0fb25f81af245ac0c9 Dec 02 10:45:45 crc kubenswrapper[4711]: I1202 10:45:45.495837 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" event={"ID":"ec850345-39cb-45c3-881d-aa6f59cf2c7a","Type":"ContainerStarted","Data":"b890994b118ff3060a2b14695157541e137243fa93986fa1f3192b500ba441e4"} Dec 02 10:45:45 crc kubenswrapper[4711]: I1202 10:45:45.496252 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" event={"ID":"ec850345-39cb-45c3-881d-aa6f59cf2c7a","Type":"ContainerStarted","Data":"dcd9f983e3fa90713d57dd4be2025a973bddf8223176cb0fb25f81af245ac0c9"} Dec 02 10:45:45 crc kubenswrapper[4711]: I1202 10:45:45.523867 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" podStartSLOduration=2.03179122 podStartE2EDuration="2.523832807s" podCreationTimestamp="2025-12-02 10:45:43 +0000 UTC" firstStartedPulling="2025-12-02 10:45:44.505583094 +0000 UTC m=+1934.214949551" lastFinishedPulling="2025-12-02 10:45:44.997624671 +0000 UTC m=+1934.706991138" observedRunningTime="2025-12-02 
10:45:45.519197942 +0000 UTC m=+1935.228564439" watchObservedRunningTime="2025-12-02 10:45:45.523832807 +0000 UTC m=+1935.233199294" Dec 02 10:45:53 crc kubenswrapper[4711]: I1202 10:45:53.592065 4711 generic.go:334] "Generic (PLEG): container finished" podID="ec850345-39cb-45c3-881d-aa6f59cf2c7a" containerID="b890994b118ff3060a2b14695157541e137243fa93986fa1f3192b500ba441e4" exitCode=0 Dec 02 10:45:53 crc kubenswrapper[4711]: I1202 10:45:53.592149 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" event={"ID":"ec850345-39cb-45c3-881d-aa6f59cf2c7a","Type":"ContainerDied","Data":"b890994b118ff3060a2b14695157541e137243fa93986fa1f3192b500ba441e4"} Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.004186 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.103571 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-inventory\") pod \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.103638 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-ssh-key\") pod \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.103756 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9hdj\" (UniqueName: \"kubernetes.io/projected/ec850345-39cb-45c3-881d-aa6f59cf2c7a-kube-api-access-g9hdj\") pod \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\" (UID: \"ec850345-39cb-45c3-881d-aa6f59cf2c7a\") " Dec 02 10:45:55 crc 
kubenswrapper[4711]: I1202 10:45:55.113108 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec850345-39cb-45c3-881d-aa6f59cf2c7a-kube-api-access-g9hdj" (OuterVolumeSpecName: "kube-api-access-g9hdj") pod "ec850345-39cb-45c3-881d-aa6f59cf2c7a" (UID: "ec850345-39cb-45c3-881d-aa6f59cf2c7a"). InnerVolumeSpecName "kube-api-access-g9hdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.138763 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ec850345-39cb-45c3-881d-aa6f59cf2c7a" (UID: "ec850345-39cb-45c3-881d-aa6f59cf2c7a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.160835 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-inventory" (OuterVolumeSpecName: "inventory") pod "ec850345-39cb-45c3-881d-aa6f59cf2c7a" (UID: "ec850345-39cb-45c3-881d-aa6f59cf2c7a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.206075 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.206109 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec850345-39cb-45c3-881d-aa6f59cf2c7a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.206118 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9hdj\" (UniqueName: \"kubernetes.io/projected/ec850345-39cb-45c3-881d-aa6f59cf2c7a-kube-api-access-g9hdj\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.622733 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" event={"ID":"ec850345-39cb-45c3-881d-aa6f59cf2c7a","Type":"ContainerDied","Data":"dcd9f983e3fa90713d57dd4be2025a973bddf8223176cb0fb25f81af245ac0c9"} Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.623225 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd9f983e3fa90713d57dd4be2025a973bddf8223176cb0fb25f81af245ac0c9" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.622829 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-frrkh" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.743277 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz"] Dec 02 10:45:55 crc kubenswrapper[4711]: E1202 10:45:55.743670 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec850345-39cb-45c3-881d-aa6f59cf2c7a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.743693 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec850345-39cb-45c3-881d-aa6f59cf2c7a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.743939 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec850345-39cb-45c3-881d-aa6f59cf2c7a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.744750 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.748361 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.748489 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.748867 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.749013 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.760877 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz"] Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.818862 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.819232 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.819310 4711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnhhc\" (UniqueName: \"kubernetes.io/projected/5ff31470-e780-4e6a-850a-6cada5050225-kube-api-access-lnhhc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.921031 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.921145 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnhhc\" (UniqueName: \"kubernetes.io/projected/5ff31470-e780-4e6a-850a-6cada5050225-kube-api-access-lnhhc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.921335 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.926070 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: 
\"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.929362 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:55 crc kubenswrapper[4711]: I1202 10:45:55.950432 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnhhc\" (UniqueName: \"kubernetes.io/projected/5ff31470-e780-4e6a-850a-6cada5050225-kube-api-access-lnhhc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:56 crc kubenswrapper[4711]: I1202 10:45:56.069057 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:45:56 crc kubenswrapper[4711]: I1202 10:45:56.596710 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz"] Dec 02 10:45:56 crc kubenswrapper[4711]: I1202 10:45:56.644167 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" event={"ID":"5ff31470-e780-4e6a-850a-6cada5050225","Type":"ContainerStarted","Data":"c864f43c3da8c4869fe84514d308bdf9765ef04e44596dda3ffaa2c355b8e4d5"} Dec 02 10:45:57 crc kubenswrapper[4711]: I1202 10:45:57.656030 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" event={"ID":"5ff31470-e780-4e6a-850a-6cada5050225","Type":"ContainerStarted","Data":"b293906e4ef987de4d60e489cffd893e325332b5c4386109d222f292b4fcf106"} Dec 02 10:45:57 crc kubenswrapper[4711]: I1202 10:45:57.680002 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" podStartSLOduration=2.201701543 podStartE2EDuration="2.679986399s" podCreationTimestamp="2025-12-02 10:45:55 +0000 UTC" firstStartedPulling="2025-12-02 10:45:56.598495342 +0000 UTC m=+1946.307861829" lastFinishedPulling="2025-12-02 10:45:57.076780238 +0000 UTC m=+1946.786146685" observedRunningTime="2025-12-02 10:45:57.677005159 +0000 UTC m=+1947.386371606" watchObservedRunningTime="2025-12-02 10:45:57.679986399 +0000 UTC m=+1947.389352846" Dec 02 10:46:07 crc kubenswrapper[4711]: I1202 10:46:07.805670 4711 generic.go:334] "Generic (PLEG): container finished" podID="5ff31470-e780-4e6a-850a-6cada5050225" containerID="b293906e4ef987de4d60e489cffd893e325332b5c4386109d222f292b4fcf106" exitCode=0 Dec 02 10:46:07 crc kubenswrapper[4711]: I1202 10:46:07.805721 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" event={"ID":"5ff31470-e780-4e6a-850a-6cada5050225","Type":"ContainerDied","Data":"b293906e4ef987de4d60e489cffd893e325332b5c4386109d222f292b4fcf106"} Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.253423 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.412542 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-inventory\") pod \"5ff31470-e780-4e6a-850a-6cada5050225\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.412615 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-ssh-key\") pod \"5ff31470-e780-4e6a-850a-6cada5050225\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.412839 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnhhc\" (UniqueName: \"kubernetes.io/projected/5ff31470-e780-4e6a-850a-6cada5050225-kube-api-access-lnhhc\") pod \"5ff31470-e780-4e6a-850a-6cada5050225\" (UID: \"5ff31470-e780-4e6a-850a-6cada5050225\") " Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.418606 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff31470-e780-4e6a-850a-6cada5050225-kube-api-access-lnhhc" (OuterVolumeSpecName: "kube-api-access-lnhhc") pod "5ff31470-e780-4e6a-850a-6cada5050225" (UID: "5ff31470-e780-4e6a-850a-6cada5050225"). InnerVolumeSpecName "kube-api-access-lnhhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.439374 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5ff31470-e780-4e6a-850a-6cada5050225" (UID: "5ff31470-e780-4e6a-850a-6cada5050225"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.444349 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-inventory" (OuterVolumeSpecName: "inventory") pod "5ff31470-e780-4e6a-850a-6cada5050225" (UID: "5ff31470-e780-4e6a-850a-6cada5050225"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.514338 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.514372 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ff31470-e780-4e6a-850a-6cada5050225-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.514381 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnhhc\" (UniqueName: \"kubernetes.io/projected/5ff31470-e780-4e6a-850a-6cada5050225-kube-api-access-lnhhc\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.836001 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" 
event={"ID":"5ff31470-e780-4e6a-850a-6cada5050225","Type":"ContainerDied","Data":"c864f43c3da8c4869fe84514d308bdf9765ef04e44596dda3ffaa2c355b8e4d5"} Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.836084 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c864f43c3da8c4869fe84514d308bdf9765ef04e44596dda3ffaa2c355b8e4d5" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.836267 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.936337 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb"] Dec 02 10:46:09 crc kubenswrapper[4711]: E1202 10:46:09.936816 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff31470-e780-4e6a-850a-6cada5050225" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.936842 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff31470-e780-4e6a-850a-6cada5050225" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.937120 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff31470-e780-4e6a-850a-6cada5050225" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.938223 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.943803 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.944235 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.944311 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.944500 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.944788 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.945487 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb"] Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.945820 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.945867 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:46:09 crc kubenswrapper[4711]: I1202 10:46:09.945892 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.025942 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.025996 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026035 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026053 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026075 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026198 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjsnq\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-kube-api-access-kjsnq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026274 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026302 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026361 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026405 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026424 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026636 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026690 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.026928 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128331 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128376 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128422 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128441 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128462 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128486 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjsnq\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-kube-api-access-kjsnq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128509 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: 
\"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128530 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128558 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128581 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128597 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc 
kubenswrapper[4711]: I1202 10:46:10.128638 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128656 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.128725 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.134830 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.134936 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.135233 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.135889 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.136823 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.137072 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.137132 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.137739 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.138108 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.138612 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.138764 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.139593 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.144545 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.157519 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjsnq\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-kube-api-access-kjsnq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.265136 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:10 crc kubenswrapper[4711]: I1202 10:46:10.900786 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb"] Dec 02 10:46:11 crc kubenswrapper[4711]: I1202 10:46:11.858840 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" event={"ID":"d8270f0b-6b4c-4682-bf69-09147b922785","Type":"ContainerStarted","Data":"32717ff93fb32ab2bf96b770399cbb4d9cdaf5cd483aa80581416c790d2c3c5e"} Dec 02 10:46:12 crc kubenswrapper[4711]: I1202 10:46:12.873324 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" event={"ID":"d8270f0b-6b4c-4682-bf69-09147b922785","Type":"ContainerStarted","Data":"220066df13610fbf2cf31e8940d60a7a8012e76d276313e171c5931e792ccd8b"} Dec 02 10:46:12 crc kubenswrapper[4711]: I1202 10:46:12.908582 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" podStartSLOduration=3.228506864 podStartE2EDuration="3.908555506s" podCreationTimestamp="2025-12-02 10:46:09 +0000 UTC" firstStartedPulling="2025-12-02 10:46:10.921179273 +0000 UTC m=+1960.630545740" lastFinishedPulling="2025-12-02 10:46:11.601227885 +0000 UTC m=+1961.310594382" observedRunningTime="2025-12-02 10:46:12.895486084 +0000 UTC m=+1962.604852541" watchObservedRunningTime="2025-12-02 10:46:12.908555506 +0000 UTC m=+1962.617921963" Dec 02 10:46:16 crc kubenswrapper[4711]: I1202 10:46:16.993614 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fwswr"] Dec 02 10:46:17 crc 
kubenswrapper[4711]: I1202 10:46:17.000996 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.021192 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwswr"] Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.082668 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-utilities\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.082761 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-catalog-content\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.082825 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55r9l\" (UniqueName: \"kubernetes.io/projected/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-kube-api-access-55r9l\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.185074 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-utilities\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 
10:46:17.185132 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-catalog-content\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.185179 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55r9l\" (UniqueName: \"kubernetes.io/projected/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-kube-api-access-55r9l\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.185984 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-utilities\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.186212 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-catalog-content\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.212740 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55r9l\" (UniqueName: \"kubernetes.io/projected/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-kube-api-access-55r9l\") pod \"redhat-operators-fwswr\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.334886 4711 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.588738 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwswr"] Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.927167 4711 generic.go:334] "Generic (PLEG): container finished" podID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerID="988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe" exitCode=0 Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.927272 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwswr" event={"ID":"e3f16a5a-02e6-4a27-ada8-ac892bffeba7","Type":"ContainerDied","Data":"988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe"} Dec 02 10:46:17 crc kubenswrapper[4711]: I1202 10:46:17.927431 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwswr" event={"ID":"e3f16a5a-02e6-4a27-ada8-ac892bffeba7","Type":"ContainerStarted","Data":"0e2ca24331db5e998c75559f3b4543dd3251d684e6f017b6358553bdc5f4e58c"} Dec 02 10:46:19 crc kubenswrapper[4711]: E1202 10:46:19.770162 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f16a5a_02e6_4a27_ada8_ac892bffeba7.slice/crio-00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c.scope\": RecentStats: unable to find data in memory cache]" Dec 02 10:46:19 crc kubenswrapper[4711]: I1202 10:46:19.956880 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwswr" event={"ID":"e3f16a5a-02e6-4a27-ada8-ac892bffeba7","Type":"ContainerStarted","Data":"00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c"} Dec 02 10:46:21 crc kubenswrapper[4711]: I1202 10:46:21.981602 4711 generic.go:334] "Generic (PLEG): container 
finished" podID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerID="00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c" exitCode=0 Dec 02 10:46:21 crc kubenswrapper[4711]: I1202 10:46:21.981687 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwswr" event={"ID":"e3f16a5a-02e6-4a27-ada8-ac892bffeba7","Type":"ContainerDied","Data":"00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c"} Dec 02 10:46:22 crc kubenswrapper[4711]: I1202 10:46:22.993527 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwswr" event={"ID":"e3f16a5a-02e6-4a27-ada8-ac892bffeba7","Type":"ContainerStarted","Data":"e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94"} Dec 02 10:46:23 crc kubenswrapper[4711]: I1202 10:46:23.020894 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fwswr" podStartSLOduration=2.48235499 podStartE2EDuration="7.020872084s" podCreationTimestamp="2025-12-02 10:46:16 +0000 UTC" firstStartedPulling="2025-12-02 10:46:17.928868269 +0000 UTC m=+1967.638234706" lastFinishedPulling="2025-12-02 10:46:22.467385353 +0000 UTC m=+1972.176751800" observedRunningTime="2025-12-02 10:46:23.015399516 +0000 UTC m=+1972.724765983" watchObservedRunningTime="2025-12-02 10:46:23.020872084 +0000 UTC m=+1972.730238551" Dec 02 10:46:27 crc kubenswrapper[4711]: I1202 10:46:27.336065 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:27 crc kubenswrapper[4711]: I1202 10:46:27.336682 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:28 crc kubenswrapper[4711]: I1202 10:46:28.381793 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwswr" 
podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="registry-server" probeResult="failure" output=< Dec 02 10:46:28 crc kubenswrapper[4711]: timeout: failed to connect service ":50051" within 1s Dec 02 10:46:28 crc kubenswrapper[4711]: > Dec 02 10:46:36 crc kubenswrapper[4711]: I1202 10:46:36.538171 4711 scope.go:117] "RemoveContainer" containerID="d290cd33687e5ad39858c3607f11bd8aca3f3f56b2f80318903c0a30926618d8" Dec 02 10:46:37 crc kubenswrapper[4711]: I1202 10:46:37.404562 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:37 crc kubenswrapper[4711]: I1202 10:46:37.490518 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:37 crc kubenswrapper[4711]: I1202 10:46:37.657381 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwswr"] Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.149196 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fwswr" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="registry-server" containerID="cri-o://e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94" gracePeriod=2 Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.615759 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.641575 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55r9l\" (UniqueName: \"kubernetes.io/projected/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-kube-api-access-55r9l\") pod \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.641860 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-utilities\") pod \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.642006 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-catalog-content\") pod \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\" (UID: \"e3f16a5a-02e6-4a27-ada8-ac892bffeba7\") " Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.644006 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-utilities" (OuterVolumeSpecName: "utilities") pod "e3f16a5a-02e6-4a27-ada8-ac892bffeba7" (UID: "e3f16a5a-02e6-4a27-ada8-ac892bffeba7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.659083 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-kube-api-access-55r9l" (OuterVolumeSpecName: "kube-api-access-55r9l") pod "e3f16a5a-02e6-4a27-ada8-ac892bffeba7" (UID: "e3f16a5a-02e6-4a27-ada8-ac892bffeba7"). InnerVolumeSpecName "kube-api-access-55r9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.744577 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.744621 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55r9l\" (UniqueName: \"kubernetes.io/projected/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-kube-api-access-55r9l\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.754234 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3f16a5a-02e6-4a27-ada8-ac892bffeba7" (UID: "e3f16a5a-02e6-4a27-ada8-ac892bffeba7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:46:39 crc kubenswrapper[4711]: I1202 10:46:39.845987 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f16a5a-02e6-4a27-ada8-ac892bffeba7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.161629 4711 generic.go:334] "Generic (PLEG): container finished" podID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerID="e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94" exitCode=0 Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.161691 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwswr" event={"ID":"e3f16a5a-02e6-4a27-ada8-ac892bffeba7","Type":"ContainerDied","Data":"e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94"} Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.161720 4711 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-fwswr" event={"ID":"e3f16a5a-02e6-4a27-ada8-ac892bffeba7","Type":"ContainerDied","Data":"0e2ca24331db5e998c75559f3b4543dd3251d684e6f017b6358553bdc5f4e58c"} Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.161739 4711 scope.go:117] "RemoveContainer" containerID="e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.161876 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwswr" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.193596 4711 scope.go:117] "RemoveContainer" containerID="00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.207790 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwswr"] Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.223384 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fwswr"] Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.252057 4711 scope.go:117] "RemoveContainer" containerID="988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.278612 4711 scope.go:117] "RemoveContainer" containerID="e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94" Dec 02 10:46:40 crc kubenswrapper[4711]: E1202 10:46:40.280229 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94\": container with ID starting with e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94 not found: ID does not exist" containerID="e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.280284 4711 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94"} err="failed to get container status \"e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94\": rpc error: code = NotFound desc = could not find container \"e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94\": container with ID starting with e13d95c0d7cbec831e22911e98dbb8be4632b107633465b5a1d90ab07d513b94 not found: ID does not exist" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.280313 4711 scope.go:117] "RemoveContainer" containerID="00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c" Dec 02 10:46:40 crc kubenswrapper[4711]: E1202 10:46:40.280700 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c\": container with ID starting with 00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c not found: ID does not exist" containerID="00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.280745 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c"} err="failed to get container status \"00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c\": rpc error: code = NotFound desc = could not find container \"00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c\": container with ID starting with 00d1d424b680c722e0b45c0ac9a6f502e33cfbab07e799dd6389d6dd019b939c not found: ID does not exist" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.280772 4711 scope.go:117] "RemoveContainer" containerID="988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe" Dec 02 10:46:40 crc kubenswrapper[4711]: E1202 
10:46:40.281065 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe\": container with ID starting with 988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe not found: ID does not exist" containerID="988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe" Dec 02 10:46:40 crc kubenswrapper[4711]: I1202 10:46:40.281096 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe"} err="failed to get container status \"988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe\": rpc error: code = NotFound desc = could not find container \"988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe\": container with ID starting with 988f57b5340060ac0e750095cc646a87e34451f7d006f0651b1c0b614533aefe not found: ID does not exist" Dec 02 10:46:40 crc kubenswrapper[4711]: E1202 10:46:40.333036 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f16a5a_02e6_4a27_ada8_ac892bffeba7.slice/crio-0e2ca24331db5e998c75559f3b4543dd3251d684e6f017b6358553bdc5f4e58c\": RecentStats: unable to find data in memory cache]" Dec 02 10:46:41 crc kubenswrapper[4711]: I1202 10:46:41.088868 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" path="/var/lib/kubelet/pods/e3f16a5a-02e6-4a27-ada8-ac892bffeba7/volumes" Dec 02 10:46:49 crc kubenswrapper[4711]: I1202 10:46:49.259261 4711 generic.go:334] "Generic (PLEG): container finished" podID="d8270f0b-6b4c-4682-bf69-09147b922785" containerID="220066df13610fbf2cf31e8940d60a7a8012e76d276313e171c5931e792ccd8b" exitCode=0 Dec 02 10:46:49 crc kubenswrapper[4711]: I1202 10:46:49.259372 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" event={"ID":"d8270f0b-6b4c-4682-bf69-09147b922785","Type":"ContainerDied","Data":"220066df13610fbf2cf31e8940d60a7a8012e76d276313e171c5931e792ccd8b"} Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.720656 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.773978 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-bootstrap-combined-ca-bundle\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.774030 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ovn-combined-ca-bundle\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.774095 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775014 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-libvirt-combined-ca-bundle\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: 
\"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775042 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjsnq\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-kube-api-access-kjsnq\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775089 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ssh-key\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775112 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775407 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775455 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-repo-setup-combined-ca-bundle\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc 
kubenswrapper[4711]: I1202 10:46:50.775475 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-neutron-metadata-combined-ca-bundle\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775515 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775545 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-inventory\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775586 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-nova-combined-ca-bundle\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.775604 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-telemetry-combined-ca-bundle\") pod \"d8270f0b-6b4c-4682-bf69-09147b922785\" (UID: \"d8270f0b-6b4c-4682-bf69-09147b922785\") " Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.780683 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.781476 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.781924 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.782608 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.782722 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.783248 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-kube-api-access-kjsnq" (OuterVolumeSpecName: "kube-api-access-kjsnq") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "kube-api-access-kjsnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.783614 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.783655 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.784186 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.784551 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.785109 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.797513 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.806988 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-inventory" (OuterVolumeSpecName: "inventory") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.807487 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d8270f0b-6b4c-4682-bf69-09147b922785" (UID: "d8270f0b-6b4c-4682-bf69-09147b922785"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879086 4711 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879388 4711 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879407 4711 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879448 4711 
reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879466 4711 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879478 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879489 4711 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879801 4711 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879822 4711 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879928 4711 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc 
kubenswrapper[4711]: I1202 10:46:50.879942 4711 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.879991 4711 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.880005 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjsnq\" (UniqueName: \"kubernetes.io/projected/d8270f0b-6b4c-4682-bf69-09147b922785-kube-api-access-kjsnq\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:50 crc kubenswrapper[4711]: I1202 10:46:50.880017 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8270f0b-6b4c-4682-bf69-09147b922785-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.284483 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" event={"ID":"d8270f0b-6b4c-4682-bf69-09147b922785","Type":"ContainerDied","Data":"32717ff93fb32ab2bf96b770399cbb4d9cdaf5cd483aa80581416c790d2c3c5e"} Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.284563 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32717ff93fb32ab2bf96b770399cbb4d9cdaf5cd483aa80581416c790d2c3c5e" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.284666 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.423648 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7"] Dec 02 10:46:51 crc kubenswrapper[4711]: E1202 10:46:51.424223 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8270f0b-6b4c-4682-bf69-09147b922785" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.424264 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8270f0b-6b4c-4682-bf69-09147b922785" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:51 crc kubenswrapper[4711]: E1202 10:46:51.424297 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="extract-content" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.424307 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="extract-content" Dec 02 10:46:51 crc kubenswrapper[4711]: E1202 10:46:51.424343 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="extract-utilities" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.424353 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="extract-utilities" Dec 02 10:46:51 crc kubenswrapper[4711]: E1202 10:46:51.424371 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="registry-server" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.424379 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="registry-server" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.424638 
4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8270f0b-6b4c-4682-bf69-09147b922785" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.424697 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f16a5a-02e6-4a27-ada8-ac892bffeba7" containerName="registry-server" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.425592 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.432235 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.432512 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.432758 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.433083 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.435246 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.436918 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7"] Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.490575 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: 
\"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.490680 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.490718 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.490806 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpzb7\" (UniqueName: \"kubernetes.io/projected/e572720a-5f65-485f-ad5b-76d5f7a782ac-kube-api-access-cpzb7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.490831 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.595163 4711 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.595425 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpzb7\" (UniqueName: \"kubernetes.io/projected/e572720a-5f65-485f-ad5b-76d5f7a782ac-kube-api-access-cpzb7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.595479 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.595609 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.595844 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.601282 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.602794 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.611828 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.614939 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.630167 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpzb7\" (UniqueName: \"kubernetes.io/projected/e572720a-5f65-485f-ad5b-76d5f7a782ac-kube-api-access-cpzb7\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-9hrn7\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:51 crc kubenswrapper[4711]: I1202 10:46:51.755151 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:46:52 crc kubenswrapper[4711]: I1202 10:46:52.325637 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7"] Dec 02 10:46:52 crc kubenswrapper[4711]: I1202 10:46:52.586008 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:46:52 crc kubenswrapper[4711]: I1202 10:46:52.586421 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:46:53 crc kubenswrapper[4711]: I1202 10:46:53.301470 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" event={"ID":"e572720a-5f65-485f-ad5b-76d5f7a782ac","Type":"ContainerStarted","Data":"cc1a879cefa6c0250130e4346153d6772dadc8e7c9d0008872a9ccad15471e91"} Dec 02 10:46:53 crc kubenswrapper[4711]: I1202 10:46:53.301915 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" event={"ID":"e572720a-5f65-485f-ad5b-76d5f7a782ac","Type":"ContainerStarted","Data":"19022ae003b435e9f73bc8b69be43d6dae18e0ded234b18b64ef0cbc0cfaaaae"} Dec 02 10:46:53 crc kubenswrapper[4711]: 
I1202 10:46:53.328616 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" podStartSLOduration=1.808909581 podStartE2EDuration="2.328550251s" podCreationTimestamp="2025-12-02 10:46:51 +0000 UTC" firstStartedPulling="2025-12-02 10:46:52.330743479 +0000 UTC m=+2002.040109916" lastFinishedPulling="2025-12-02 10:46:52.850384109 +0000 UTC m=+2002.559750586" observedRunningTime="2025-12-02 10:46:53.318619973 +0000 UTC m=+2003.027986420" watchObservedRunningTime="2025-12-02 10:46:53.328550251 +0000 UTC m=+2003.037916698" Dec 02 10:47:22 crc kubenswrapper[4711]: I1202 10:47:22.585729 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:47:22 crc kubenswrapper[4711]: I1202 10:47:22.586264 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:47:49 crc kubenswrapper[4711]: I1202 10:47:49.924417 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qmdv4"] Dec 02 10:47:49 crc kubenswrapper[4711]: I1202 10:47:49.927680 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:49 crc kubenswrapper[4711]: I1202 10:47:49.935607 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmdv4"] Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.038183 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchjd\" (UniqueName: \"kubernetes.io/projected/40712fa7-bc4e-4062-b466-f8fc0af28d39-kube-api-access-gchjd\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.038310 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40712fa7-bc4e-4062-b466-f8fc0af28d39-catalog-content\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.038499 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40712fa7-bc4e-4062-b466-f8fc0af28d39-utilities\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.140077 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchjd\" (UniqueName: \"kubernetes.io/projected/40712fa7-bc4e-4062-b466-f8fc0af28d39-kube-api-access-gchjd\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.140133 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40712fa7-bc4e-4062-b466-f8fc0af28d39-catalog-content\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.140213 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40712fa7-bc4e-4062-b466-f8fc0af28d39-utilities\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.140740 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40712fa7-bc4e-4062-b466-f8fc0af28d39-utilities\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.140948 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40712fa7-bc4e-4062-b466-f8fc0af28d39-catalog-content\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.166659 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchjd\" (UniqueName: \"kubernetes.io/projected/40712fa7-bc4e-4062-b466-f8fc0af28d39-kube-api-access-gchjd\") pod \"community-operators-qmdv4\" (UID: \"40712fa7-bc4e-4062-b466-f8fc0af28d39\") " pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.264377 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.778784 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmdv4"] Dec 02 10:47:50 crc kubenswrapper[4711]: I1202 10:47:50.909744 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmdv4" event={"ID":"40712fa7-bc4e-4062-b466-f8fc0af28d39","Type":"ContainerStarted","Data":"2f270eaaf32214082bae3e9829676af32466d6fc30d67ded61c173ced2ebe6ee"} Dec 02 10:47:51 crc kubenswrapper[4711]: I1202 10:47:51.924763 4711 generic.go:334] "Generic (PLEG): container finished" podID="40712fa7-bc4e-4062-b466-f8fc0af28d39" containerID="0d1ba9db9be63ff81feb597f1bafdb9da81c814141564bcb184d46d8991c1dad" exitCode=0 Dec 02 10:47:51 crc kubenswrapper[4711]: I1202 10:47:51.924878 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmdv4" event={"ID":"40712fa7-bc4e-4062-b466-f8fc0af28d39","Type":"ContainerDied","Data":"0d1ba9db9be63ff81feb597f1bafdb9da81c814141564bcb184d46d8991c1dad"} Dec 02 10:47:52 crc kubenswrapper[4711]: I1202 10:47:52.586348 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:47:52 crc kubenswrapper[4711]: I1202 10:47:52.586499 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:47:52 crc kubenswrapper[4711]: I1202 10:47:52.586562 4711 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:47:52 crc kubenswrapper[4711]: I1202 10:47:52.587727 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90d7b159bfc5894ef3714c163745ef7fbdd8eca8ae697756aad37eb187f934b6"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:47:52 crc kubenswrapper[4711]: I1202 10:47:52.587841 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://90d7b159bfc5894ef3714c163745ef7fbdd8eca8ae697756aad37eb187f934b6" gracePeriod=600 Dec 02 10:47:52 crc kubenswrapper[4711]: I1202 10:47:52.939076 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="90d7b159bfc5894ef3714c163745ef7fbdd8eca8ae697756aad37eb187f934b6" exitCode=0 Dec 02 10:47:52 crc kubenswrapper[4711]: I1202 10:47:52.939172 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"90d7b159bfc5894ef3714c163745ef7fbdd8eca8ae697756aad37eb187f934b6"} Dec 02 10:47:52 crc kubenswrapper[4711]: I1202 10:47:52.939760 4711 scope.go:117] "RemoveContainer" containerID="7e6d59d22c97bc3692dabe7db244bc975b3a706b0f92c8863a2f1aca24bde71c" Dec 02 10:47:53 crc kubenswrapper[4711]: I1202 10:47:53.953129 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" 
event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"} Dec 02 10:47:54 crc kubenswrapper[4711]: I1202 10:47:54.963920 4711 generic.go:334] "Generic (PLEG): container finished" podID="e572720a-5f65-485f-ad5b-76d5f7a782ac" containerID="cc1a879cefa6c0250130e4346153d6772dadc8e7c9d0008872a9ccad15471e91" exitCode=0 Dec 02 10:47:54 crc kubenswrapper[4711]: I1202 10:47:54.964110 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" event={"ID":"e572720a-5f65-485f-ad5b-76d5f7a782ac","Type":"ContainerDied","Data":"cc1a879cefa6c0250130e4346153d6772dadc8e7c9d0008872a9ccad15471e91"} Dec 02 10:47:55 crc kubenswrapper[4711]: I1202 10:47:55.981858 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmdv4" event={"ID":"40712fa7-bc4e-4062-b466-f8fc0af28d39","Type":"ContainerStarted","Data":"2892a89cce9714a99ce7aa5437946fe4ea22c48ad0e0c1c516bc423220018c34"} Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.505815 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.674680 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpzb7\" (UniqueName: \"kubernetes.io/projected/e572720a-5f65-485f-ad5b-76d5f7a782ac-kube-api-access-cpzb7\") pod \"e572720a-5f65-485f-ad5b-76d5f7a782ac\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.674833 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ssh-key\") pod \"e572720a-5f65-485f-ad5b-76d5f7a782ac\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.674855 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovn-combined-ca-bundle\") pod \"e572720a-5f65-485f-ad5b-76d5f7a782ac\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.674979 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovncontroller-config-0\") pod \"e572720a-5f65-485f-ad5b-76d5f7a782ac\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.675141 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-inventory\") pod \"e572720a-5f65-485f-ad5b-76d5f7a782ac\" (UID: \"e572720a-5f65-485f-ad5b-76d5f7a782ac\") " Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.681559 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/e572720a-5f65-485f-ad5b-76d5f7a782ac-kube-api-access-cpzb7" (OuterVolumeSpecName: "kube-api-access-cpzb7") pod "e572720a-5f65-485f-ad5b-76d5f7a782ac" (UID: "e572720a-5f65-485f-ad5b-76d5f7a782ac"). InnerVolumeSpecName "kube-api-access-cpzb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.682575 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e572720a-5f65-485f-ad5b-76d5f7a782ac" (UID: "e572720a-5f65-485f-ad5b-76d5f7a782ac"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.699933 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e572720a-5f65-485f-ad5b-76d5f7a782ac" (UID: "e572720a-5f65-485f-ad5b-76d5f7a782ac"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.711281 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e572720a-5f65-485f-ad5b-76d5f7a782ac" (UID: "e572720a-5f65-485f-ad5b-76d5f7a782ac"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.712042 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-inventory" (OuterVolumeSpecName: "inventory") pod "e572720a-5f65-485f-ad5b-76d5f7a782ac" (UID: "e572720a-5f65-485f-ad5b-76d5f7a782ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.778768 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.778817 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpzb7\" (UniqueName: \"kubernetes.io/projected/e572720a-5f65-485f-ad5b-76d5f7a782ac-kube-api-access-cpzb7\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.778835 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.778851 4711 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.778870 4711 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e572720a-5f65-485f-ad5b-76d5f7a782ac-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.994862 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.994914 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9hrn7" event={"ID":"e572720a-5f65-485f-ad5b-76d5f7a782ac","Type":"ContainerDied","Data":"19022ae003b435e9f73bc8b69be43d6dae18e0ded234b18b64ef0cbc0cfaaaae"} Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.995641 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19022ae003b435e9f73bc8b69be43d6dae18e0ded234b18b64ef0cbc0cfaaaae" Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.998451 4711 generic.go:334] "Generic (PLEG): container finished" podID="40712fa7-bc4e-4062-b466-f8fc0af28d39" containerID="2892a89cce9714a99ce7aa5437946fe4ea22c48ad0e0c1c516bc423220018c34" exitCode=0 Dec 02 10:47:56 crc kubenswrapper[4711]: I1202 10:47:56.998508 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmdv4" event={"ID":"40712fa7-bc4e-4062-b466-f8fc0af28d39","Type":"ContainerDied","Data":"2892a89cce9714a99ce7aa5437946fe4ea22c48ad0e0c1c516bc423220018c34"} Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.114782 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc"] Dec 02 10:47:57 crc kubenswrapper[4711]: E1202 10:47:57.115304 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e572720a-5f65-485f-ad5b-76d5f7a782ac" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.115367 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e572720a-5f65-485f-ad5b-76d5f7a782ac" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.119355 4711 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e572720a-5f65-485f-ad5b-76d5f7a782ac" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.121828 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.124662 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.124912 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.125179 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.125309 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.125429 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.125692 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.128055 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc"] Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.289425 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: 
\"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.289880 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.290181 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.290447 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.290767 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mdf\" (UniqueName: \"kubernetes.io/projected/18808e54-ca3d-47a8-ae93-d05737319878-kube-api-access-n6mdf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.290829 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.393242 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.393366 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.393470 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mdf\" (UniqueName: \"kubernetes.io/projected/18808e54-ca3d-47a8-ae93-d05737319878-kube-api-access-n6mdf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.393496 4711 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.393556 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.393592 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.400420 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.400427 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.402433 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.404134 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.406120 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.418019 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mdf\" (UniqueName: \"kubernetes.io/projected/18808e54-ca3d-47a8-ae93-d05737319878-kube-api-access-n6mdf\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:57 crc kubenswrapper[4711]: I1202 10:47:57.440425 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:47:58 crc kubenswrapper[4711]: I1202 10:47:58.013674 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmdv4" event={"ID":"40712fa7-bc4e-4062-b466-f8fc0af28d39","Type":"ContainerStarted","Data":"721d9c2fa1339070a2eecbf5491595556f3af840f889963148cd677e66b28543"} Dec 02 10:47:58 crc kubenswrapper[4711]: I1202 10:47:58.048489 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qmdv4" podStartSLOduration=3.455359133 podStartE2EDuration="9.048458245s" podCreationTimestamp="2025-12-02 10:47:49 +0000 UTC" firstStartedPulling="2025-12-02 10:47:51.92792687 +0000 UTC m=+2061.637293327" lastFinishedPulling="2025-12-02 10:47:57.521025982 +0000 UTC m=+2067.230392439" observedRunningTime="2025-12-02 10:47:58.041661267 +0000 UTC m=+2067.751027714" watchObservedRunningTime="2025-12-02 10:47:58.048458245 +0000 UTC m=+2067.757824692" Dec 02 10:47:58 crc kubenswrapper[4711]: I1202 10:47:58.055879 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc"] Dec 02 10:47:59 crc kubenswrapper[4711]: I1202 10:47:59.023691 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" event={"ID":"18808e54-ca3d-47a8-ae93-d05737319878","Type":"ContainerStarted","Data":"cff601c46da6cfbeaae2dc10d24a3959cc966b005da939cacb68357d0f1fa208"} Dec 02 10:47:59 crc kubenswrapper[4711]: I1202 10:47:59.024033 4711 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" event={"ID":"18808e54-ca3d-47a8-ae93-d05737319878","Type":"ContainerStarted","Data":"bb1d2ed731241a983757b491468b0ccd9ca986305cc18bfcfdcd7b65436cbb4c"} Dec 02 10:47:59 crc kubenswrapper[4711]: I1202 10:47:59.045266 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" podStartSLOduration=1.53815301 podStartE2EDuration="2.045246211s" podCreationTimestamp="2025-12-02 10:47:57 +0000 UTC" firstStartedPulling="2025-12-02 10:47:58.061629081 +0000 UTC m=+2067.770995528" lastFinishedPulling="2025-12-02 10:47:58.568722281 +0000 UTC m=+2068.278088729" observedRunningTime="2025-12-02 10:47:59.039920121 +0000 UTC m=+2068.749286568" watchObservedRunningTime="2025-12-02 10:47:59.045246211 +0000 UTC m=+2068.754612668" Dec 02 10:48:00 crc kubenswrapper[4711]: I1202 10:48:00.265132 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:48:00 crc kubenswrapper[4711]: I1202 10:48:00.266306 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:48:00 crc kubenswrapper[4711]: I1202 10:48:00.353002 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:48:02 crc kubenswrapper[4711]: I1202 10:48:02.113219 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qmdv4" Dec 02 10:48:04 crc kubenswrapper[4711]: I1202 10:48:04.052759 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmdv4"] Dec 02 10:48:04 crc kubenswrapper[4711]: I1202 10:48:04.414377 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-bl9gw"] Dec 02 10:48:04 crc kubenswrapper[4711]: I1202 10:48:04.414694 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bl9gw" podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerName="registry-server" containerID="cri-o://b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb" gracePeriod=2 Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.033891 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.092145 4711 generic.go:334] "Generic (PLEG): container finished" podID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerID="b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb" exitCode=0 Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.092244 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bl9gw" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.092242 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bl9gw" event={"ID":"9c2e1ec8-8c64-4bf3-a577-0db5a91328de","Type":"ContainerDied","Data":"b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb"} Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.092325 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bl9gw" event={"ID":"9c2e1ec8-8c64-4bf3-a577-0db5a91328de","Type":"ContainerDied","Data":"588a08df06f4c02ba35d262994cab287e0975522e1539427dae2a94b99e949d5"} Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.092348 4711 scope.go:117] "RemoveContainer" containerID="b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.119496 4711 scope.go:117] "RemoveContainer" containerID="c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.141870 4711 scope.go:117] "RemoveContainer" containerID="2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.170173 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-utilities\") pod \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.170415 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgshq\" (UniqueName: \"kubernetes.io/projected/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-kube-api-access-hgshq\") pod \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " Dec 02 10:48:05 crc 
kubenswrapper[4711]: I1202 10:48:05.170615 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-catalog-content\") pod \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\" (UID: \"9c2e1ec8-8c64-4bf3-a577-0db5a91328de\") " Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.170918 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-utilities" (OuterVolumeSpecName: "utilities") pod "9c2e1ec8-8c64-4bf3-a577-0db5a91328de" (UID: "9c2e1ec8-8c64-4bf3-a577-0db5a91328de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.171269 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.178214 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-kube-api-access-hgshq" (OuterVolumeSpecName: "kube-api-access-hgshq") pod "9c2e1ec8-8c64-4bf3-a577-0db5a91328de" (UID: "9c2e1ec8-8c64-4bf3-a577-0db5a91328de"). InnerVolumeSpecName "kube-api-access-hgshq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.224233 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c2e1ec8-8c64-4bf3-a577-0db5a91328de" (UID: "9c2e1ec8-8c64-4bf3-a577-0db5a91328de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.241591 4711 scope.go:117] "RemoveContainer" containerID="b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb" Dec 02 10:48:05 crc kubenswrapper[4711]: E1202 10:48:05.242020 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb\": container with ID starting with b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb not found: ID does not exist" containerID="b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.242117 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb"} err="failed to get container status \"b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb\": rpc error: code = NotFound desc = could not find container \"b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb\": container with ID starting with b4aa404994a7c1a631409eb0f565879ca0e0c749db30d1ccda434236d38e90bb not found: ID does not exist" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.242199 4711 scope.go:117] "RemoveContainer" containerID="c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9" Dec 02 10:48:05 crc kubenswrapper[4711]: E1202 10:48:05.243208 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9\": container with ID starting with c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9 not found: ID does not exist" containerID="c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.243301 
4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9"} err="failed to get container status \"c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9\": rpc error: code = NotFound desc = could not find container \"c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9\": container with ID starting with c6e1270a0b8c2e893327ea0b96ac0789d31061b053eb8adca54504916c4919b9 not found: ID does not exist" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.243368 4711 scope.go:117] "RemoveContainer" containerID="2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab" Dec 02 10:48:05 crc kubenswrapper[4711]: E1202 10:48:05.249876 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab\": container with ID starting with 2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab not found: ID does not exist" containerID="2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.249913 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab"} err="failed to get container status \"2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab\": rpc error: code = NotFound desc = could not find container \"2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab\": container with ID starting with 2dbc144cf949f150c20ad7959461a32f0dd37e56e59cba6346fc54802d966bab not found: ID does not exist" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.272426 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.272686 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgshq\" (UniqueName: \"kubernetes.io/projected/9c2e1ec8-8c64-4bf3-a577-0db5a91328de-kube-api-access-hgshq\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.430622 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bl9gw"] Dec 02 10:48:05 crc kubenswrapper[4711]: I1202 10:48:05.440008 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bl9gw"] Dec 02 10:48:07 crc kubenswrapper[4711]: I1202 10:48:07.089701 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" path="/var/lib/kubelet/pods/9c2e1ec8-8c64-4bf3-a577-0db5a91328de/volumes" Dec 02 10:48:47 crc kubenswrapper[4711]: I1202 10:48:47.514681 4711 generic.go:334] "Generic (PLEG): container finished" podID="18808e54-ca3d-47a8-ae93-d05737319878" containerID="cff601c46da6cfbeaae2dc10d24a3959cc966b005da939cacb68357d0f1fa208" exitCode=0 Dec 02 10:48:47 crc kubenswrapper[4711]: I1202 10:48:47.514975 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" event={"ID":"18808e54-ca3d-47a8-ae93-d05737319878","Type":"ContainerDied","Data":"cff601c46da6cfbeaae2dc10d24a3959cc966b005da939cacb68357d0f1fa208"} Dec 02 10:48:48 crc kubenswrapper[4711]: I1202 10:48:48.952882 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.052331 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6mdf\" (UniqueName: \"kubernetes.io/projected/18808e54-ca3d-47a8-ae93-d05737319878-kube-api-access-n6mdf\") pod \"18808e54-ca3d-47a8-ae93-d05737319878\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.052460 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-metadata-combined-ca-bundle\") pod \"18808e54-ca3d-47a8-ae93-d05737319878\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.052632 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-ovn-metadata-agent-neutron-config-0\") pod \"18808e54-ca3d-47a8-ae93-d05737319878\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.052699 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-ssh-key\") pod \"18808e54-ca3d-47a8-ae93-d05737319878\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.052787 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-nova-metadata-neutron-config-0\") pod \"18808e54-ca3d-47a8-ae93-d05737319878\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " Dec 02 
10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.052823 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-inventory\") pod \"18808e54-ca3d-47a8-ae93-d05737319878\" (UID: \"18808e54-ca3d-47a8-ae93-d05737319878\") " Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.058815 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18808e54-ca3d-47a8-ae93-d05737319878-kube-api-access-n6mdf" (OuterVolumeSpecName: "kube-api-access-n6mdf") pod "18808e54-ca3d-47a8-ae93-d05737319878" (UID: "18808e54-ca3d-47a8-ae93-d05737319878"). InnerVolumeSpecName "kube-api-access-n6mdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.059510 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "18808e54-ca3d-47a8-ae93-d05737319878" (UID: "18808e54-ca3d-47a8-ae93-d05737319878"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.090442 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "18808e54-ca3d-47a8-ae93-d05737319878" (UID: "18808e54-ca3d-47a8-ae93-d05737319878"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.094004 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "18808e54-ca3d-47a8-ae93-d05737319878" (UID: "18808e54-ca3d-47a8-ae93-d05737319878"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.108947 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "18808e54-ca3d-47a8-ae93-d05737319878" (UID: "18808e54-ca3d-47a8-ae93-d05737319878"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.110389 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-inventory" (OuterVolumeSpecName: "inventory") pod "18808e54-ca3d-47a8-ae93-d05737319878" (UID: "18808e54-ca3d-47a8-ae93-d05737319878"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.155372 4711 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.155413 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.155523 4711 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.155542 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.155554 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6mdf\" (UniqueName: \"kubernetes.io/projected/18808e54-ca3d-47a8-ae93-d05737319878-kube-api-access-n6mdf\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.155568 4711 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18808e54-ca3d-47a8-ae93-d05737319878-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.537558 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" 
event={"ID":"18808e54-ca3d-47a8-ae93-d05737319878","Type":"ContainerDied","Data":"bb1d2ed731241a983757b491468b0ccd9ca986305cc18bfcfdcd7b65436cbb4c"} Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.537632 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.537636 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb1d2ed731241a983757b491468b0ccd9ca986305cc18bfcfdcd7b65436cbb4c" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.650274 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n"] Dec 02 10:48:49 crc kubenswrapper[4711]: E1202 10:48:49.650848 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18808e54-ca3d-47a8-ae93-d05737319878" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.650889 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="18808e54-ca3d-47a8-ae93-d05737319878" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 02 10:48:49 crc kubenswrapper[4711]: E1202 10:48:49.650921 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerName="extract-utilities" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.650929 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerName="extract-utilities" Dec 02 10:48:49 crc kubenswrapper[4711]: E1202 10:48:49.650985 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerName="registry-server" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.650995 4711 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerName="registry-server" Dec 02 10:48:49 crc kubenswrapper[4711]: E1202 10:48:49.651009 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerName="extract-content" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.651016 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerName="extract-content" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.651246 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="18808e54-ca3d-47a8-ae93-d05737319878" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.651284 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2e1ec8-8c64-4bf3-a577-0db5a91328de" containerName="registry-server" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.652345 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.656004 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.656147 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.656019 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.656470 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.656677 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.662390 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n"] Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.773062 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.773767 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.773846 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.774166 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.774392 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6ts8\" (UniqueName: \"kubernetes.io/projected/48489c70-bfb2-4dbf-b002-1dcdb3da737f-kube-api-access-q6ts8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.876255 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6ts8\" (UniqueName: \"kubernetes.io/projected/48489c70-bfb2-4dbf-b002-1dcdb3da737f-kube-api-access-q6ts8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.876378 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.876412 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.876438 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.876479 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.881691 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.895021 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.895851 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.896876 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.907019 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6ts8\" (UniqueName: \"kubernetes.io/projected/48489c70-bfb2-4dbf-b002-1dcdb3da737f-kube-api-access-q6ts8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-czk8n\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:49 crc kubenswrapper[4711]: I1202 10:48:49.973838 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:48:50 crc kubenswrapper[4711]: I1202 10:48:50.349637 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n"] Dec 02 10:48:50 crc kubenswrapper[4711]: W1202 10:48:50.358082 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48489c70_bfb2_4dbf_b002_1dcdb3da737f.slice/crio-7a64999e027ce06e7cc03a7c4220865574c2ac709876bb750bedc8c68187c2f1 WatchSource:0}: Error finding container 7a64999e027ce06e7cc03a7c4220865574c2ac709876bb750bedc8c68187c2f1: Status 404 returned error can't find the container with id 7a64999e027ce06e7cc03a7c4220865574c2ac709876bb750bedc8c68187c2f1 Dec 02 10:48:50 crc kubenswrapper[4711]: I1202 10:48:50.547099 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" event={"ID":"48489c70-bfb2-4dbf-b002-1dcdb3da737f","Type":"ContainerStarted","Data":"7a64999e027ce06e7cc03a7c4220865574c2ac709876bb750bedc8c68187c2f1"} Dec 02 10:48:52 crc kubenswrapper[4711]: I1202 10:48:52.575710 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" event={"ID":"48489c70-bfb2-4dbf-b002-1dcdb3da737f","Type":"ContainerStarted","Data":"1a9faf83304b7246e3b2455e992da26816952bfe17e24b0f08fcd7a90c2c42b1"} Dec 02 10:48:52 crc kubenswrapper[4711]: I1202 10:48:52.600608 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" podStartSLOduration=1.842698977 podStartE2EDuration="3.600549174s" podCreationTimestamp="2025-12-02 10:48:49 +0000 UTC" firstStartedPulling="2025-12-02 10:48:50.36288966 +0000 UTC m=+2120.072256107" lastFinishedPulling="2025-12-02 10:48:52.120739767 +0000 UTC m=+2121.830106304" 
observedRunningTime="2025-12-02 10:48:52.592078152 +0000 UTC m=+2122.301444609" watchObservedRunningTime="2025-12-02 10:48:52.600549174 +0000 UTC m=+2122.309915641" Dec 02 10:49:52 crc kubenswrapper[4711]: I1202 10:49:52.586352 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:49:52 crc kubenswrapper[4711]: I1202 10:49:52.587024 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:50:22 crc kubenswrapper[4711]: I1202 10:50:22.586325 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:50:22 crc kubenswrapper[4711]: I1202 10:50:22.587128 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.586319 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.586978 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.587158 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.588121 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.588227 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" gracePeriod=600 Dec 02 10:50:52 crc kubenswrapper[4711]: E1202 10:50:52.725379 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.953024 
4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" exitCode=0 Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.953259 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"} Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.954074 4711 scope.go:117] "RemoveContainer" containerID="90d7b159bfc5894ef3714c163745ef7fbdd8eca8ae697756aad37eb187f934b6" Dec 02 10:50:52 crc kubenswrapper[4711]: I1202 10:50:52.955018 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:50:52 crc kubenswrapper[4711]: E1202 10:50:52.955642 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:51:07 crc kubenswrapper[4711]: I1202 10:51:07.079195 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:51:07 crc kubenswrapper[4711]: E1202 10:51:07.080272 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:51:19 crc kubenswrapper[4711]: I1202 10:51:19.078943 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:51:19 crc kubenswrapper[4711]: E1202 10:51:19.079922 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:51:33 crc kubenswrapper[4711]: I1202 10:51:33.078592 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:51:33 crc kubenswrapper[4711]: E1202 10:51:33.079389 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:51:44 crc kubenswrapper[4711]: I1202 10:51:44.078448 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:51:44 crc kubenswrapper[4711]: E1202 10:51:44.079302 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:51:57 crc kubenswrapper[4711]: I1202 10:51:57.079260 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:51:57 crc kubenswrapper[4711]: E1202 10:51:57.079889 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:52:11 crc kubenswrapper[4711]: I1202 10:52:11.087003 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:52:11 crc kubenswrapper[4711]: E1202 10:52:11.087968 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:52:26 crc kubenswrapper[4711]: I1202 10:52:26.078528 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:52:26 crc kubenswrapper[4711]: E1202 10:52:26.079542 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:52:37 crc kubenswrapper[4711]: I1202 10:52:37.078581 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:52:37 crc kubenswrapper[4711]: E1202 10:52:37.079327 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:52:50 crc kubenswrapper[4711]: I1202 10:52:50.174863 4711 generic.go:334] "Generic (PLEG): container finished" podID="48489c70-bfb2-4dbf-b002-1dcdb3da737f" containerID="1a9faf83304b7246e3b2455e992da26816952bfe17e24b0f08fcd7a90c2c42b1" exitCode=0 Dec 02 10:52:50 crc kubenswrapper[4711]: I1202 10:52:50.175025 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" event={"ID":"48489c70-bfb2-4dbf-b002-1dcdb3da737f","Type":"ContainerDied","Data":"1a9faf83304b7246e3b2455e992da26816952bfe17e24b0f08fcd7a90c2c42b1"} Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.092470 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:52:51 crc kubenswrapper[4711]: E1202 10:52:51.092988 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.685698 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.755183 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-secret-0\") pod \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.755294 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-ssh-key\") pod \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.755393 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-inventory\") pod \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.755422 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6ts8\" (UniqueName: \"kubernetes.io/projected/48489c70-bfb2-4dbf-b002-1dcdb3da737f-kube-api-access-q6ts8\") pod \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.755486 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-combined-ca-bundle\") pod \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\" (UID: \"48489c70-bfb2-4dbf-b002-1dcdb3da737f\") " Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.769217 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "48489c70-bfb2-4dbf-b002-1dcdb3da737f" (UID: "48489c70-bfb2-4dbf-b002-1dcdb3da737f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.799118 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48489c70-bfb2-4dbf-b002-1dcdb3da737f-kube-api-access-q6ts8" (OuterVolumeSpecName: "kube-api-access-q6ts8") pod "48489c70-bfb2-4dbf-b002-1dcdb3da737f" (UID: "48489c70-bfb2-4dbf-b002-1dcdb3da737f"). InnerVolumeSpecName "kube-api-access-q6ts8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.828303 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "48489c70-bfb2-4dbf-b002-1dcdb3da737f" (UID: "48489c70-bfb2-4dbf-b002-1dcdb3da737f"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.857328 4711 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.857356 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6ts8\" (UniqueName: \"kubernetes.io/projected/48489c70-bfb2-4dbf-b002-1dcdb3da737f-kube-api-access-q6ts8\") on node \"crc\" DevicePath \"\"" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.857367 4711 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.863277 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "48489c70-bfb2-4dbf-b002-1dcdb3da737f" (UID: "48489c70-bfb2-4dbf-b002-1dcdb3da737f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.874187 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-inventory" (OuterVolumeSpecName: "inventory") pod "48489c70-bfb2-4dbf-b002-1dcdb3da737f" (UID: "48489c70-bfb2-4dbf-b002-1dcdb3da737f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.959520 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:52:51 crc kubenswrapper[4711]: I1202 10:52:51.959559 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48489c70-bfb2-4dbf-b002-1dcdb3da737f-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.202398 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" event={"ID":"48489c70-bfb2-4dbf-b002-1dcdb3da737f","Type":"ContainerDied","Data":"7a64999e027ce06e7cc03a7c4220865574c2ac709876bb750bedc8c68187c2f1"} Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.202474 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a64999e027ce06e7cc03a7c4220865574c2ac709876bb750bedc8c68187c2f1" Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.202498 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-czk8n" Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.299300 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"] Dec 02 10:52:52 crc kubenswrapper[4711]: E1202 10:52:52.300074 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48489c70-bfb2-4dbf-b002-1dcdb3da737f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.300125 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="48489c70-bfb2-4dbf-b002-1dcdb3da737f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.300484 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="48489c70-bfb2-4dbf-b002-1dcdb3da737f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.301516 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.304049 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.304292 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.304367 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.304367 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.304518 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.304528 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.312668 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.336785 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"]
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.368924 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.369110 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.369152 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.369220 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6fcz\" (UniqueName: \"kubernetes.io/projected/45d45e5b-27e6-42bf-863d-e04caf847040-kube-api-access-z6fcz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.369273 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.369310 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.369360 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.369546 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/45d45e5b-27e6-42bf-863d-e04caf847040-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.369659 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471181 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/45d45e5b-27e6-42bf-863d-e04caf847040-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471249 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471280 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471325 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471357 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471393 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6fcz\" (UniqueName: \"kubernetes.io/projected/45d45e5b-27e6-42bf-863d-e04caf847040-kube-api-access-z6fcz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471420 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471450 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.471483 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.472888 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/45d45e5b-27e6-42bf-863d-e04caf847040-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.476195 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.476373 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.476665 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.478059 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.483734 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.483996 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.485181 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.488048 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6fcz\" (UniqueName: \"kubernetes.io/projected/45d45e5b-27e6-42bf-863d-e04caf847040-kube-api-access-z6fcz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-96fdh\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:52 crc kubenswrapper[4711]: I1202 10:52:52.632905 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:52:53 crc kubenswrapper[4711]: I1202 10:52:53.212774 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"]
Dec 02 10:52:53 crc kubenswrapper[4711]: I1202 10:52:53.228195 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 10:52:54 crc kubenswrapper[4711]: I1202 10:52:54.223846 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh" event={"ID":"45d45e5b-27e6-42bf-863d-e04caf847040","Type":"ContainerStarted","Data":"ab0ee6474831ecae0ca1734d1590839608fdc95ea15bb513a18c32821a3fba32"}
Dec 02 10:52:55 crc kubenswrapper[4711]: I1202 10:52:55.234798 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh" event={"ID":"45d45e5b-27e6-42bf-863d-e04caf847040","Type":"ContainerStarted","Data":"c14cefd10c01485ac511d472dfa09394e5b92136d2cffb3274f6558baf373278"}
Dec 02 10:52:55 crc kubenswrapper[4711]: I1202 10:52:55.258546 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh" podStartSLOduration=2.374560201 podStartE2EDuration="3.258466481s" podCreationTimestamp="2025-12-02 10:52:52 +0000 UTC" firstStartedPulling="2025-12-02 10:52:53.227909234 +0000 UTC m=+2362.937275681" lastFinishedPulling="2025-12-02 10:52:54.111815504 +0000 UTC m=+2363.821181961" observedRunningTime="2025-12-02 10:52:55.256067457 +0000 UTC m=+2364.965433904" watchObservedRunningTime="2025-12-02 10:52:55.258466481 +0000 UTC m=+2364.967832928"
Dec 02 10:53:06 crc kubenswrapper[4711]: I1202 10:53:06.079177 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:53:06 crc kubenswrapper[4711]: E1202 10:53:06.079881 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:53:20 crc kubenswrapper[4711]: I1202 10:53:20.079106 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:53:20 crc kubenswrapper[4711]: E1202 10:53:20.079875 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:53:33 crc kubenswrapper[4711]: I1202 10:53:33.079151 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:53:33 crc kubenswrapper[4711]: E1202 10:53:33.080052 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:53:45 crc kubenswrapper[4711]: I1202 10:53:45.079112 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:53:45 crc kubenswrapper[4711]: E1202 10:53:45.079914 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:53:56 crc kubenswrapper[4711]: I1202 10:53:56.078197 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:53:56 crc kubenswrapper[4711]: E1202 10:53:56.079167 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:54:07 crc kubenswrapper[4711]: I1202 10:54:07.079213 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:54:07 crc kubenswrapper[4711]: E1202 10:54:07.080279 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:54:20 crc kubenswrapper[4711]: I1202 10:54:20.078842 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:54:20 crc kubenswrapper[4711]: E1202 10:54:20.079803 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:54:35 crc kubenswrapper[4711]: I1202 10:54:35.078864 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:54:35 crc kubenswrapper[4711]: E1202 10:54:35.079594 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:54:49 crc kubenswrapper[4711]: I1202 10:54:49.079168 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:54:49 crc kubenswrapper[4711]: E1202 10:54:49.080081 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:55:03 crc kubenswrapper[4711]: I1202 10:55:03.079206 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:55:03 crc kubenswrapper[4711]: E1202 10:55:03.080140 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:55:14 crc kubenswrapper[4711]: I1202 10:55:14.078900 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:55:14 crc kubenswrapper[4711]: E1202 10:55:14.079907 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:55:26 crc kubenswrapper[4711]: I1202 10:55:26.078535 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:55:26 crc kubenswrapper[4711]: E1202 10:55:26.079570 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:55:37 crc kubenswrapper[4711]: I1202 10:55:37.079363 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec"
Dec 02 10:55:37 crc kubenswrapper[4711]: E1202 10:55:37.080598 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 10:55:37 crc kubenswrapper[4711]: I1202 10:55:37.960361 4711 generic.go:334] "Generic (PLEG): container finished" podID="45d45e5b-27e6-42bf-863d-e04caf847040" containerID="c14cefd10c01485ac511d472dfa09394e5b92136d2cffb3274f6558baf373278" exitCode=0
Dec 02 10:55:37 crc kubenswrapper[4711]: I1202 10:55:37.960418 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh" event={"ID":"45d45e5b-27e6-42bf-863d-e04caf847040","Type":"ContainerDied","Data":"c14cefd10c01485ac511d472dfa09394e5b92136d2cffb3274f6558baf373278"}
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.393251 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465427 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-combined-ca-bundle\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465524 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-0\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465589 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-0\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465712 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-1\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465823 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-1\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465852 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-ssh-key\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465895 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/45d45e5b-27e6-42bf-863d-e04caf847040-nova-extra-config-0\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465929 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-inventory\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.465972 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6fcz\" (UniqueName: \"kubernetes.io/projected/45d45e5b-27e6-42bf-863d-e04caf847040-kube-api-access-z6fcz\") pod \"45d45e5b-27e6-42bf-863d-e04caf847040\" (UID: \"45d45e5b-27e6-42bf-863d-e04caf847040\") "
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.481291 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.481552 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d45e5b-27e6-42bf-863d-e04caf847040-kube-api-access-z6fcz" (OuterVolumeSpecName: "kube-api-access-z6fcz") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "kube-api-access-z6fcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.494315 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d45e5b-27e6-42bf-863d-e04caf847040-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.505349 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.505835 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.508128 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.509196 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.509281 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.510866 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-inventory" (OuterVolumeSpecName: "inventory") pod "45d45e5b-27e6-42bf-863d-e04caf847040" (UID: "45d45e5b-27e6-42bf-863d-e04caf847040"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567579 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6fcz\" (UniqueName: \"kubernetes.io/projected/45d45e5b-27e6-42bf-863d-e04caf847040-kube-api-access-z6fcz\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567620 4711 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567638 4711 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567654 4711 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567670 4711 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567684 4711 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567700 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567717 4711 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/45d45e5b-27e6-42bf-863d-e04caf847040-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.567732 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d45e5b-27e6-42bf-863d-e04caf847040-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.982861 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh" event={"ID":"45d45e5b-27e6-42bf-863d-e04caf847040","Type":"ContainerDied","Data":"ab0ee6474831ecae0ca1734d1590839608fdc95ea15bb513a18c32821a3fba32"}
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.983163 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab0ee6474831ecae0ca1734d1590839608fdc95ea15bb513a18c32821a3fba32"
Dec 02 10:55:39 crc kubenswrapper[4711]: I1202 10:55:39.982996 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-96fdh"
Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.141042 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5"]
Dec 02 10:55:40 crc kubenswrapper[4711]: E1202 10:55:40.141600 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d45e5b-27e6-42bf-863d-e04caf847040" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.141626 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d45e5b-27e6-42bf-863d-e04caf847040" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.141811 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d45e5b-27e6-42bf-863d-e04caf847040" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.142605 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.145733 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.146055 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.146520 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.146690 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.150788 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdvbl" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.156848 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5"] Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.214714 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.214967 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l77br\" (UniqueName: \"kubernetes.io/projected/5192ee19-472c-4f7c-b41d-4a11b518b045-kube-api-access-l77br\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.215090 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.215264 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.215347 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.215420 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.215503 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.316924 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.316983 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.317009 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.317038 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.317070 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.317093 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l77br\" (UniqueName: \"kubernetes.io/projected/5192ee19-472c-4f7c-b41d-4a11b518b045-kube-api-access-l77br\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.317129 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.321509 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.324413 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.327285 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.332345 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.333652 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.333731 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.338003 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l77br\" (UniqueName: \"kubernetes.io/projected/5192ee19-472c-4f7c-b41d-4a11b518b045-kube-api-access-l77br\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:40 crc kubenswrapper[4711]: I1202 10:55:40.469997 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:55:41 crc kubenswrapper[4711]: I1202 10:55:41.025309 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5"] Dec 02 10:55:42 crc kubenswrapper[4711]: I1202 10:55:42.027072 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" event={"ID":"5192ee19-472c-4f7c-b41d-4a11b518b045","Type":"ContainerStarted","Data":"63e355829381b2f268049bdc293c358d22e64949d412fd1987b85679684dfe03"} Dec 02 10:55:42 crc kubenswrapper[4711]: I1202 10:55:42.027391 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" event={"ID":"5192ee19-472c-4f7c-b41d-4a11b518b045","Type":"ContainerStarted","Data":"27a771c30bc7edc8cce00b6970e7bae5124a797a34fc82ce8bb5a88db422c046"} Dec 02 10:55:42 crc kubenswrapper[4711]: I1202 10:55:42.053481 4711 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" podStartSLOduration=1.41327125 podStartE2EDuration="2.053339329s" podCreationTimestamp="2025-12-02 10:55:40 +0000 UTC" firstStartedPulling="2025-12-02 10:55:41.023720233 +0000 UTC m=+2530.733086690" lastFinishedPulling="2025-12-02 10:55:41.663788302 +0000 UTC m=+2531.373154769" observedRunningTime="2025-12-02 10:55:42.050875683 +0000 UTC m=+2531.760242210" watchObservedRunningTime="2025-12-02 10:55:42.053339329 +0000 UTC m=+2531.762705816" Dec 02 10:55:48 crc kubenswrapper[4711]: I1202 10:55:48.078608 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:55:48 crc kubenswrapper[4711]: E1202 10:55:48.079534 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 10:56:00 crc kubenswrapper[4711]: I1202 10:56:00.078887 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:56:01 crc kubenswrapper[4711]: I1202 10:56:01.228618 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"ce1d82b527eae7c31f9a034a481bb3607adffd804c9c681434ee7921132c3317"} Dec 02 10:56:35 crc kubenswrapper[4711]: I1202 10:56:35.929882 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vcr96"] Dec 02 10:56:35 crc kubenswrapper[4711]: I1202 10:56:35.932717 
4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:35 crc kubenswrapper[4711]: I1202 10:56:35.955700 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vcr96"] Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.078814 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-catalog-content\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.078881 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzr8\" (UniqueName: \"kubernetes.io/projected/847cd3c3-7ce0-4214-be1a-b1084c9eecae-kube-api-access-xfzr8\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.080252 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-utilities\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.182123 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-utilities\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.182369 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-catalog-content\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.182419 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfzr8\" (UniqueName: \"kubernetes.io/projected/847cd3c3-7ce0-4214-be1a-b1084c9eecae-kube-api-access-xfzr8\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.182995 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-catalog-content\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.183334 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-utilities\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.208790 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfzr8\" (UniqueName: \"kubernetes.io/projected/847cd3c3-7ce0-4214-be1a-b1084c9eecae-kube-api-access-xfzr8\") pod \"redhat-operators-vcr96\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.288724 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:36 crc kubenswrapper[4711]: I1202 10:56:36.802478 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vcr96"] Dec 02 10:56:37 crc kubenswrapper[4711]: I1202 10:56:37.574104 4711 generic.go:334] "Generic (PLEG): container finished" podID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerID="6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad" exitCode=0 Dec 02 10:56:37 crc kubenswrapper[4711]: I1202 10:56:37.574158 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcr96" event={"ID":"847cd3c3-7ce0-4214-be1a-b1084c9eecae","Type":"ContainerDied","Data":"6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad"} Dec 02 10:56:37 crc kubenswrapper[4711]: I1202 10:56:37.574207 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcr96" event={"ID":"847cd3c3-7ce0-4214-be1a-b1084c9eecae","Type":"ContainerStarted","Data":"8bb61bbb15d52df19c98766081a24d0ff47b95dd369c4d27d5c39e9325dba03f"} Dec 02 10:56:38 crc kubenswrapper[4711]: I1202 10:56:38.584835 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcr96" event={"ID":"847cd3c3-7ce0-4214-be1a-b1084c9eecae","Type":"ContainerStarted","Data":"49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa"} Dec 02 10:56:40 crc kubenswrapper[4711]: I1202 10:56:40.608813 4711 generic.go:334] "Generic (PLEG): container finished" podID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerID="49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa" exitCode=0 Dec 02 10:56:40 crc kubenswrapper[4711]: I1202 10:56:40.608865 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcr96" 
event={"ID":"847cd3c3-7ce0-4214-be1a-b1084c9eecae","Type":"ContainerDied","Data":"49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa"} Dec 02 10:56:41 crc kubenswrapper[4711]: I1202 10:56:41.621194 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcr96" event={"ID":"847cd3c3-7ce0-4214-be1a-b1084c9eecae","Type":"ContainerStarted","Data":"73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d"} Dec 02 10:56:46 crc kubenswrapper[4711]: I1202 10:56:46.289732 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:46 crc kubenswrapper[4711]: I1202 10:56:46.290426 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:47 crc kubenswrapper[4711]: I1202 10:56:47.344145 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vcr96" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="registry-server" probeResult="failure" output=< Dec 02 10:56:47 crc kubenswrapper[4711]: timeout: failed to connect service ":50051" within 1s Dec 02 10:56:47 crc kubenswrapper[4711]: > Dec 02 10:56:56 crc kubenswrapper[4711]: I1202 10:56:56.365470 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:56 crc kubenswrapper[4711]: I1202 10:56:56.392693 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vcr96" podStartSLOduration=17.664394995 podStartE2EDuration="21.392652087s" podCreationTimestamp="2025-12-02 10:56:35 +0000 UTC" firstStartedPulling="2025-12-02 10:56:37.578325426 +0000 UTC m=+2587.287691883" lastFinishedPulling="2025-12-02 10:56:41.306582508 +0000 UTC m=+2591.015948975" observedRunningTime="2025-12-02 10:56:41.641541577 +0000 UTC m=+2591.350908034" 
watchObservedRunningTime="2025-12-02 10:56:56.392652087 +0000 UTC m=+2606.102018534" Dec 02 10:56:56 crc kubenswrapper[4711]: I1202 10:56:56.455746 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:56 crc kubenswrapper[4711]: I1202 10:56:56.613253 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vcr96"] Dec 02 10:56:57 crc kubenswrapper[4711]: I1202 10:56:57.808234 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vcr96" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="registry-server" containerID="cri-o://73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d" gracePeriod=2 Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.276197 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.461159 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-utilities\") pod \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.461576 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-catalog-content\") pod \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.461596 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfzr8\" (UniqueName: \"kubernetes.io/projected/847cd3c3-7ce0-4214-be1a-b1084c9eecae-kube-api-access-xfzr8\") pod 
\"847cd3c3-7ce0-4214-be1a-b1084c9eecae\" (UID: \"847cd3c3-7ce0-4214-be1a-b1084c9eecae\") " Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.462040 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-utilities" (OuterVolumeSpecName: "utilities") pod "847cd3c3-7ce0-4214-be1a-b1084c9eecae" (UID: "847cd3c3-7ce0-4214-be1a-b1084c9eecae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.466831 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847cd3c3-7ce0-4214-be1a-b1084c9eecae-kube-api-access-xfzr8" (OuterVolumeSpecName: "kube-api-access-xfzr8") pod "847cd3c3-7ce0-4214-be1a-b1084c9eecae" (UID: "847cd3c3-7ce0-4214-be1a-b1084c9eecae"). InnerVolumeSpecName "kube-api-access-xfzr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.564111 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.564142 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfzr8\" (UniqueName: \"kubernetes.io/projected/847cd3c3-7ce0-4214-be1a-b1084c9eecae-kube-api-access-xfzr8\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.588347 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "847cd3c3-7ce0-4214-be1a-b1084c9eecae" (UID: "847cd3c3-7ce0-4214-be1a-b1084c9eecae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.669891 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847cd3c3-7ce0-4214-be1a-b1084c9eecae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.823297 4711 generic.go:334] "Generic (PLEG): container finished" podID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerID="73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d" exitCode=0 Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.823383 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcr96" event={"ID":"847cd3c3-7ce0-4214-be1a-b1084c9eecae","Type":"ContainerDied","Data":"73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d"} Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.823448 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcr96" event={"ID":"847cd3c3-7ce0-4214-be1a-b1084c9eecae","Type":"ContainerDied","Data":"8bb61bbb15d52df19c98766081a24d0ff47b95dd369c4d27d5c39e9325dba03f"} Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.823471 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vcr96" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.823564 4711 scope.go:117] "RemoveContainer" containerID="73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.848151 4711 scope.go:117] "RemoveContainer" containerID="49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.871437 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vcr96"] Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.878695 4711 scope.go:117] "RemoveContainer" containerID="6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.889553 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vcr96"] Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.926879 4711 scope.go:117] "RemoveContainer" containerID="73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d" Dec 02 10:56:58 crc kubenswrapper[4711]: E1202 10:56:58.927351 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d\": container with ID starting with 73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d not found: ID does not exist" containerID="73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.927390 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d"} err="failed to get container status \"73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d\": rpc error: code = NotFound desc = could not find container 
\"73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d\": container with ID starting with 73bc9fe6f5e449d5c9bd35d9a2ea8e3b750666862e1e67af7c52cf320cfe564d not found: ID does not exist" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.927410 4711 scope.go:117] "RemoveContainer" containerID="49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa" Dec 02 10:56:58 crc kubenswrapper[4711]: E1202 10:56:58.927869 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa\": container with ID starting with 49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa not found: ID does not exist" containerID="49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.927908 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa"} err="failed to get container status \"49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa\": rpc error: code = NotFound desc = could not find container \"49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa\": container with ID starting with 49c2d9b51f9f99abb825fa848259cd5f39a006a71042292e81aad7dc2f4df8fa not found: ID does not exist" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.927921 4711 scope.go:117] "RemoveContainer" containerID="6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad" Dec 02 10:56:58 crc kubenswrapper[4711]: E1202 10:56:58.928420 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad\": container with ID starting with 6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad not found: ID does not exist" 
containerID="6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad" Dec 02 10:56:58 crc kubenswrapper[4711]: I1202 10:56:58.928459 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad"} err="failed to get container status \"6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad\": rpc error: code = NotFound desc = could not find container \"6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad\": container with ID starting with 6c1294a4b35607f14d9984c94b9ad492054933396f5f1a83daec081e4208efad not found: ID does not exist" Dec 02 10:56:59 crc kubenswrapper[4711]: I1202 10:56:59.089938 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" path="/var/lib/kubelet/pods/847cd3c3-7ce0-4214-be1a-b1084c9eecae/volumes" Dec 02 10:57:55 crc kubenswrapper[4711]: I1202 10:57:55.373204 4711 generic.go:334] "Generic (PLEG): container finished" podID="5192ee19-472c-4f7c-b41d-4a11b518b045" containerID="63e355829381b2f268049bdc293c358d22e64949d412fd1987b85679684dfe03" exitCode=0 Dec 02 10:57:55 crc kubenswrapper[4711]: I1202 10:57:55.373311 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" event={"ID":"5192ee19-472c-4f7c-b41d-4a11b518b045","Type":"ContainerDied","Data":"63e355829381b2f268049bdc293c358d22e64949d412fd1987b85679684dfe03"} Dec 02 10:57:56 crc kubenswrapper[4711]: I1202 10:57:56.860750 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.011880 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l77br\" (UniqueName: \"kubernetes.io/projected/5192ee19-472c-4f7c-b41d-4a11b518b045-kube-api-access-l77br\") pod \"5192ee19-472c-4f7c-b41d-4a11b518b045\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.011916 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-1\") pod \"5192ee19-472c-4f7c-b41d-4a11b518b045\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.011981 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-telemetry-combined-ca-bundle\") pod \"5192ee19-472c-4f7c-b41d-4a11b518b045\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.012077 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-inventory\") pod \"5192ee19-472c-4f7c-b41d-4a11b518b045\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.012156 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-2\") pod \"5192ee19-472c-4f7c-b41d-4a11b518b045\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " Dec 02 10:57:57 crc kubenswrapper[4711]: 
I1202 10:57:57.012228 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-0\") pod \"5192ee19-472c-4f7c-b41d-4a11b518b045\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.012247 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ssh-key\") pod \"5192ee19-472c-4f7c-b41d-4a11b518b045\" (UID: \"5192ee19-472c-4f7c-b41d-4a11b518b045\") " Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.018715 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5192ee19-472c-4f7c-b41d-4a11b518b045" (UID: "5192ee19-472c-4f7c-b41d-4a11b518b045"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.040356 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5192ee19-472c-4f7c-b41d-4a11b518b045-kube-api-access-l77br" (OuterVolumeSpecName: "kube-api-access-l77br") pod "5192ee19-472c-4f7c-b41d-4a11b518b045" (UID: "5192ee19-472c-4f7c-b41d-4a11b518b045"). InnerVolumeSpecName "kube-api-access-l77br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.047068 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "5192ee19-472c-4f7c-b41d-4a11b518b045" (UID: "5192ee19-472c-4f7c-b41d-4a11b518b045"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.048902 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "5192ee19-472c-4f7c-b41d-4a11b518b045" (UID: "5192ee19-472c-4f7c-b41d-4a11b518b045"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.058436 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5192ee19-472c-4f7c-b41d-4a11b518b045" (UID: "5192ee19-472c-4f7c-b41d-4a11b518b045"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.059800 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "5192ee19-472c-4f7c-b41d-4a11b518b045" (UID: "5192ee19-472c-4f7c-b41d-4a11b518b045"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.062558 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-inventory" (OuterVolumeSpecName: "inventory") pod "5192ee19-472c-4f7c-b41d-4a11b518b045" (UID: "5192ee19-472c-4f7c-b41d-4a11b518b045"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.114413 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l77br\" (UniqueName: \"kubernetes.io/projected/5192ee19-472c-4f7c-b41d-4a11b518b045-kube-api-access-l77br\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.114466 4711 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.114480 4711 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.114492 4711 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.114505 4711 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.114517 4711 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.114529 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5192ee19-472c-4f7c-b41d-4a11b518b045-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.400413 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" event={"ID":"5192ee19-472c-4f7c-b41d-4a11b518b045","Type":"ContainerDied","Data":"27a771c30bc7edc8cce00b6970e7bae5124a797a34fc82ce8bb5a88db422c046"} Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.400470 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27a771c30bc7edc8cce00b6970e7bae5124a797a34fc82ce8bb5a88db422c046" Dec 02 10:57:57 crc kubenswrapper[4711]: I1202 10:57:57.400618 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5" Dec 02 10:58:00 crc kubenswrapper[4711]: E1202 10:58:00.382636 4711 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.249:40654->38.129.56.249:46835: write tcp 38.129.56.249:40654->38.129.56.249:46835: write: broken pipe Dec 02 10:58:22 crc kubenswrapper[4711]: I1202 10:58:22.586680 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:58:22 crc kubenswrapper[4711]: I1202 10:58:22.587773 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.810048 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9rpcn"] Dec 02 10:58:35 crc kubenswrapper[4711]: E1202 10:58:35.811008 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="registry-server" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.811039 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="registry-server" Dec 02 10:58:35 crc kubenswrapper[4711]: E1202 10:58:35.811062 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="extract-content" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.811070 4711 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="extract-content" Dec 02 10:58:35 crc kubenswrapper[4711]: E1202 10:58:35.811088 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5192ee19-472c-4f7c-b41d-4a11b518b045" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.811099 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="5192ee19-472c-4f7c-b41d-4a11b518b045" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:35 crc kubenswrapper[4711]: E1202 10:58:35.811139 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="extract-utilities" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.811147 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="extract-utilities" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.811364 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="5192ee19-472c-4f7c-b41d-4a11b518b045" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.811395 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="847cd3c3-7ce0-4214-be1a-b1084c9eecae" containerName="registry-server" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.813163 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.833001 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rpcn"] Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.889508 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24slm\" (UniqueName: \"kubernetes.io/projected/b6b8bdda-a0c6-4082-a64a-b064e50816bf-kube-api-access-24slm\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.889694 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-utilities\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.889766 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-catalog-content\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.991651 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24slm\" (UniqueName: \"kubernetes.io/projected/b6b8bdda-a0c6-4082-a64a-b064e50816bf-kube-api-access-24slm\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.991764 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-utilities\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.991808 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-catalog-content\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.992376 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-utilities\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:35 crc kubenswrapper[4711]: I1202 10:58:35.992417 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-catalog-content\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:36 crc kubenswrapper[4711]: I1202 10:58:36.011930 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24slm\" (UniqueName: \"kubernetes.io/projected/b6b8bdda-a0c6-4082-a64a-b064e50816bf-kube-api-access-24slm\") pod \"redhat-marketplace-9rpcn\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:36 crc kubenswrapper[4711]: I1202 10:58:36.139023 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:36 crc kubenswrapper[4711]: I1202 10:58:36.636172 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rpcn"] Dec 02 10:58:36 crc kubenswrapper[4711]: I1202 10:58:36.785188 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rpcn" event={"ID":"b6b8bdda-a0c6-4082-a64a-b064e50816bf","Type":"ContainerStarted","Data":"92b328b41f3f6090b600770a1d19cd20fcf1880fb1dfb0d8be9ce8654be782ca"} Dec 02 10:58:37 crc kubenswrapper[4711]: I1202 10:58:37.801053 4711 generic.go:334] "Generic (PLEG): container finished" podID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerID="793decdffde403d0c85c1bc64396731bae2f08977e063611dc77ce18df8b39a5" exitCode=0 Dec 02 10:58:37 crc kubenswrapper[4711]: I1202 10:58:37.801130 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rpcn" event={"ID":"b6b8bdda-a0c6-4082-a64a-b064e50816bf","Type":"ContainerDied","Data":"793decdffde403d0c85c1bc64396731bae2f08977e063611dc77ce18df8b39a5"} Dec 02 10:58:37 crc kubenswrapper[4711]: I1202 10:58:37.803922 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.284884 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.287336 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.290859 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.290872 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bmhll" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.291314 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.291443 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.293935 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.353686 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.354043 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-config-data\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.354149 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.455898 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.456072 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.456116 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2rrx\" (UniqueName: \"kubernetes.io/projected/725581bd-6264-4ca6-b1fa-126c3c50800b-kube-api-access-j2rrx\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.456176 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.456240 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.456265 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.456296 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.456386 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.456422 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-config-data\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.457369 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.457623 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-config-data\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.463182 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.557888 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.558283 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2rrx\" (UniqueName: \"kubernetes.io/projected/725581bd-6264-4ca6-b1fa-126c3c50800b-kube-api-access-j2rrx\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.558328 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.558377 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.558395 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.558417 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.558567 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.558790 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") 
device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.559002 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.563047 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.564330 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.585751 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2rrx\" (UniqueName: \"kubernetes.io/projected/725581bd-6264-4ca6-b1fa-126c3c50800b-kube-api-access-j2rrx\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.588152 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.611210 4711 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.861136 4711 generic.go:334] "Generic (PLEG): container finished" podID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerID="fa460490e47d356dc9616ec0229e4c08b1bd90e5edc612e2ec7f4675ddf6753f" exitCode=0 Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.861497 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rpcn" event={"ID":"b6b8bdda-a0c6-4082-a64a-b064e50816bf","Type":"ContainerDied","Data":"fa460490e47d356dc9616ec0229e4c08b1bd90e5edc612e2ec7f4675ddf6753f"} Dec 02 10:58:39 crc kubenswrapper[4711]: I1202 10:58:39.988695 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 10:58:40 crc kubenswrapper[4711]: I1202 10:58:40.878533 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rpcn" event={"ID":"b6b8bdda-a0c6-4082-a64a-b064e50816bf","Type":"ContainerStarted","Data":"13d322d86cfa4b36ef187fa96ebc73d6bfe43f3a5a90a7726c28a5511833e7ca"} Dec 02 10:58:40 crc kubenswrapper[4711]: I1202 10:58:40.881691 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"725581bd-6264-4ca6-b1fa-126c3c50800b","Type":"ContainerStarted","Data":"a12123d320a2f2d0c18c9e9ba5d92707969eb8fb921fc95eb684715bb5367b32"} Dec 02 10:58:40 crc kubenswrapper[4711]: I1202 10:58:40.915888 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9rpcn" podStartSLOduration=3.361103325 podStartE2EDuration="5.915829333s" podCreationTimestamp="2025-12-02 10:58:35 +0000 UTC" firstStartedPulling="2025-12-02 10:58:37.803467987 +0000 UTC m=+2707.512834434" lastFinishedPulling="2025-12-02 10:58:40.358193975 +0000 UTC m=+2710.067560442" observedRunningTime="2025-12-02 10:58:40.904860879 +0000 UTC 
m=+2710.614227326" watchObservedRunningTime="2025-12-02 10:58:40.915829333 +0000 UTC m=+2710.625195780" Dec 02 10:58:46 crc kubenswrapper[4711]: I1202 10:58:46.139573 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:46 crc kubenswrapper[4711]: I1202 10:58:46.140037 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:46 crc kubenswrapper[4711]: I1202 10:58:46.208624 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:47 crc kubenswrapper[4711]: I1202 10:58:46.998606 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:47 crc kubenswrapper[4711]: I1202 10:58:47.045141 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rpcn"] Dec 02 10:58:48 crc kubenswrapper[4711]: I1202 10:58:48.951322 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9rpcn" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerName="registry-server" containerID="cri-o://13d322d86cfa4b36ef187fa96ebc73d6bfe43f3a5a90a7726c28a5511833e7ca" gracePeriod=2 Dec 02 10:58:49 crc kubenswrapper[4711]: E1202 10:58:49.218289 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6b8bdda_a0c6_4082_a64a_b064e50816bf.slice/crio-13d322d86cfa4b36ef187fa96ebc73d6bfe43f3a5a90a7726c28a5511833e7ca.scope\": RecentStats: unable to find data in memory cache]" Dec 02 10:58:50 crc kubenswrapper[4711]: I1202 10:58:50.974765 4711 generic.go:334] "Generic (PLEG): container finished" podID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" 
containerID="13d322d86cfa4b36ef187fa96ebc73d6bfe43f3a5a90a7726c28a5511833e7ca" exitCode=0 Dec 02 10:58:50 crc kubenswrapper[4711]: I1202 10:58:50.974840 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rpcn" event={"ID":"b6b8bdda-a0c6-4082-a64a-b064e50816bf","Type":"ContainerDied","Data":"13d322d86cfa4b36ef187fa96ebc73d6bfe43f3a5a90a7726c28a5511833e7ca"} Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.785637 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.869038 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-utilities\") pod \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.869164 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24slm\" (UniqueName: \"kubernetes.io/projected/b6b8bdda-a0c6-4082-a64a-b064e50816bf-kube-api-access-24slm\") pod \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.870432 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-utilities" (OuterVolumeSpecName: "utilities") pod "b6b8bdda-a0c6-4082-a64a-b064e50816bf" (UID: "b6b8bdda-a0c6-4082-a64a-b064e50816bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.869277 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-catalog-content\") pod \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\" (UID: \"b6b8bdda-a0c6-4082-a64a-b064e50816bf\") " Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.871872 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.881161 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b8bdda-a0c6-4082-a64a-b064e50816bf-kube-api-access-24slm" (OuterVolumeSpecName: "kube-api-access-24slm") pod "b6b8bdda-a0c6-4082-a64a-b064e50816bf" (UID: "b6b8bdda-a0c6-4082-a64a-b064e50816bf"). InnerVolumeSpecName "kube-api-access-24slm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.900175 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6b8bdda-a0c6-4082-a64a-b064e50816bf" (UID: "b6b8bdda-a0c6-4082-a64a-b064e50816bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.973578 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24slm\" (UniqueName: \"kubernetes.io/projected/b6b8bdda-a0c6-4082-a64a-b064e50816bf-kube-api-access-24slm\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.973612 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b8bdda-a0c6-4082-a64a-b064e50816bf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.987653 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rpcn" event={"ID":"b6b8bdda-a0c6-4082-a64a-b064e50816bf","Type":"ContainerDied","Data":"92b328b41f3f6090b600770a1d19cd20fcf1880fb1dfb0d8be9ce8654be782ca"} Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.987703 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rpcn" Dec 02 10:58:51 crc kubenswrapper[4711]: I1202 10:58:51.987709 4711 scope.go:117] "RemoveContainer" containerID="13d322d86cfa4b36ef187fa96ebc73d6bfe43f3a5a90a7726c28a5511833e7ca" Dec 02 10:58:52 crc kubenswrapper[4711]: I1202 10:58:52.022592 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rpcn"] Dec 02 10:58:52 crc kubenswrapper[4711]: I1202 10:58:52.032794 4711 scope.go:117] "RemoveContainer" containerID="fa460490e47d356dc9616ec0229e4c08b1bd90e5edc612e2ec7f4675ddf6753f" Dec 02 10:58:52 crc kubenswrapper[4711]: I1202 10:58:52.033420 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rpcn"] Dec 02 10:58:52 crc kubenswrapper[4711]: I1202 10:58:52.060070 4711 scope.go:117] "RemoveContainer" containerID="793decdffde403d0c85c1bc64396731bae2f08977e063611dc77ce18df8b39a5" Dec 02 10:58:52 crc kubenswrapper[4711]: I1202 10:58:52.585474 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:58:52 crc kubenswrapper[4711]: I1202 10:58:52.585530 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:58:53 crc kubenswrapper[4711]: I1202 10:58:53.092756 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" path="/var/lib/kubelet/pods/b6b8bdda-a0c6-4082-a64a-b064e50816bf/volumes" Dec 02 10:59:10 crc kubenswrapper[4711]: 
E1202 10:59:10.454484 4711 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 02 10:59:10 crc kubenswrapper[4711]: E1202 10:59:10.455562 4711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Vo
lumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2rrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(725581bd-6264-4ca6-b1fa-126c3c50800b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:59:10 crc kubenswrapper[4711]: E1202 10:59:10.457314 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="725581bd-6264-4ca6-b1fa-126c3c50800b" Dec 02 10:59:11 crc kubenswrapper[4711]: E1202 10:59:11.186240 4711 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="725581bd-6264-4ca6-b1fa-126c3c50800b" Dec 02 10:59:22 crc kubenswrapper[4711]: I1202 10:59:22.610451 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:59:22 crc kubenswrapper[4711]: I1202 10:59:22.611291 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:59:22 crc kubenswrapper[4711]: I1202 10:59:22.611394 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 10:59:22 crc kubenswrapper[4711]: I1202 10:59:22.612202 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce1d82b527eae7c31f9a034a481bb3607adffd804c9c681434ee7921132c3317"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:59:22 crc kubenswrapper[4711]: I1202 10:59:22.612306 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" 
containerName="machine-config-daemon" containerID="cri-o://ce1d82b527eae7c31f9a034a481bb3607adffd804c9c681434ee7921132c3317" gracePeriod=600 Dec 02 10:59:23 crc kubenswrapper[4711]: I1202 10:59:23.316897 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="ce1d82b527eae7c31f9a034a481bb3607adffd804c9c681434ee7921132c3317" exitCode=0 Dec 02 10:59:23 crc kubenswrapper[4711]: I1202 10:59:23.317097 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"ce1d82b527eae7c31f9a034a481bb3607adffd804c9c681434ee7921132c3317"} Dec 02 10:59:23 crc kubenswrapper[4711]: I1202 10:59:23.317417 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d"} Dec 02 10:59:23 crc kubenswrapper[4711]: I1202 10:59:23.317454 4711 scope.go:117] "RemoveContainer" containerID="6fce46b01bba88e4d2d001b31f3147b3566b1a402437df71dcb8cffd2ff873ec" Dec 02 10:59:25 crc kubenswrapper[4711]: I1202 10:59:25.346691 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"725581bd-6264-4ca6-b1fa-126c3c50800b","Type":"ContainerStarted","Data":"c63722886c96dd1326b26496648f05cb1d03d4f44c013f31c5950d7f285b9d77"} Dec 02 10:59:25 crc kubenswrapper[4711]: I1202 10:59:25.370159 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.788582983 podStartE2EDuration="47.370112413s" podCreationTimestamp="2025-12-02 10:58:38 +0000 UTC" firstStartedPulling="2025-12-02 10:58:40.007666194 +0000 UTC m=+2709.717032641" lastFinishedPulling="2025-12-02 10:59:23.589195604 +0000 UTC 
m=+2753.298562071" observedRunningTime="2025-12-02 10:59:25.364026859 +0000 UTC m=+2755.073393316" watchObservedRunningTime="2025-12-02 10:59:25.370112413 +0000 UTC m=+2755.079478860" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.146768 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686"] Dec 02 11:00:00 crc kubenswrapper[4711]: E1202 11:00:00.149408 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerName="extract-utilities" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.149447 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerName="extract-utilities" Dec 02 11:00:00 crc kubenswrapper[4711]: E1202 11:00:00.149487 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerName="extract-content" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.149497 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerName="extract-content" Dec 02 11:00:00 crc kubenswrapper[4711]: E1202 11:00:00.149544 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerName="registry-server" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.149555 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerName="registry-server" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.149769 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b8bdda-a0c6-4082-a64a-b064e50816bf" containerName="registry-server" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.150646 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.153569 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.153572 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.162930 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686"] Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.227705 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fd54\" (UniqueName: \"kubernetes.io/projected/2850fec8-a81e-45ec-a7de-4722d8fd3b10-kube-api-access-6fd54\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.227784 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2850fec8-a81e-45ec-a7de-4722d8fd3b10-secret-volume\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.228104 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2850fec8-a81e-45ec-a7de-4722d8fd3b10-config-volume\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.330043 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2850fec8-a81e-45ec-a7de-4722d8fd3b10-config-volume\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.330165 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fd54\" (UniqueName: \"kubernetes.io/projected/2850fec8-a81e-45ec-a7de-4722d8fd3b10-kube-api-access-6fd54\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.330220 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2850fec8-a81e-45ec-a7de-4722d8fd3b10-secret-volume\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.331077 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2850fec8-a81e-45ec-a7de-4722d8fd3b10-config-volume\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.336216 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2850fec8-a81e-45ec-a7de-4722d8fd3b10-secret-volume\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.349103 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fd54\" (UniqueName: \"kubernetes.io/projected/2850fec8-a81e-45ec-a7de-4722d8fd3b10-kube-api-access-6fd54\") pod \"collect-profiles-29411220-vk686\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.497110 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:00 crc kubenswrapper[4711]: I1202 11:00:00.910378 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686"] Dec 02 11:00:01 crc kubenswrapper[4711]: I1202 11:00:01.844888 4711 generic.go:334] "Generic (PLEG): container finished" podID="2850fec8-a81e-45ec-a7de-4722d8fd3b10" containerID="1d7a255dee2e25f81cc60069fbb96d441befea8266ae356a92312bb2f133dfde" exitCode=0 Dec 02 11:00:01 crc kubenswrapper[4711]: I1202 11:00:01.845033 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" event={"ID":"2850fec8-a81e-45ec-a7de-4722d8fd3b10","Type":"ContainerDied","Data":"1d7a255dee2e25f81cc60069fbb96d441befea8266ae356a92312bb2f133dfde"} Dec 02 11:00:01 crc kubenswrapper[4711]: I1202 11:00:01.845454 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" 
event={"ID":"2850fec8-a81e-45ec-a7de-4722d8fd3b10","Type":"ContainerStarted","Data":"3d90eb0b899aac0394655302e6e7f294650bfcffca02c2ee145713bf78928370"} Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.224181 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.390983 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2850fec8-a81e-45ec-a7de-4722d8fd3b10-config-volume" (OuterVolumeSpecName: "config-volume") pod "2850fec8-a81e-45ec-a7de-4722d8fd3b10" (UID: "2850fec8-a81e-45ec-a7de-4722d8fd3b10"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.391052 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2850fec8-a81e-45ec-a7de-4722d8fd3b10-config-volume\") pod \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.391212 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fd54\" (UniqueName: \"kubernetes.io/projected/2850fec8-a81e-45ec-a7de-4722d8fd3b10-kube-api-access-6fd54\") pod \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.392339 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2850fec8-a81e-45ec-a7de-4722d8fd3b10-secret-volume\") pod \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\" (UID: \"2850fec8-a81e-45ec-a7de-4722d8fd3b10\") " Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.393077 4711 reconciler_common.go:293] "Volume detached for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2850fec8-a81e-45ec-a7de-4722d8fd3b10-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.399509 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2850fec8-a81e-45ec-a7de-4722d8fd3b10-kube-api-access-6fd54" (OuterVolumeSpecName: "kube-api-access-6fd54") pod "2850fec8-a81e-45ec-a7de-4722d8fd3b10" (UID: "2850fec8-a81e-45ec-a7de-4722d8fd3b10"). InnerVolumeSpecName "kube-api-access-6fd54". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.407645 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2850fec8-a81e-45ec-a7de-4722d8fd3b10-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2850fec8-a81e-45ec-a7de-4722d8fd3b10" (UID: "2850fec8-a81e-45ec-a7de-4722d8fd3b10"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.495248 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fd54\" (UniqueName: \"kubernetes.io/projected/2850fec8-a81e-45ec-a7de-4722d8fd3b10-kube-api-access-6fd54\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.495305 4711 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2850fec8-a81e-45ec-a7de-4722d8fd3b10-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.863825 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" event={"ID":"2850fec8-a81e-45ec-a7de-4722d8fd3b10","Type":"ContainerDied","Data":"3d90eb0b899aac0394655302e6e7f294650bfcffca02c2ee145713bf78928370"} Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.863900 
4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d90eb0b899aac0394655302e6e7f294650bfcffca02c2ee145713bf78928370" Dec 02 11:00:03 crc kubenswrapper[4711]: I1202 11:00:03.863902 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-vk686" Dec 02 11:00:04 crc kubenswrapper[4711]: I1202 11:00:04.312595 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng"] Dec 02 11:00:04 crc kubenswrapper[4711]: I1202 11:00:04.324867 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-prrng"] Dec 02 11:00:05 crc kubenswrapper[4711]: I1202 11:00:05.095301 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84dbad5b-6e48-48a8-bbe1-76f6e92eb785" path="/var/lib/kubelet/pods/84dbad5b-6e48-48a8-bbe1-76f6e92eb785/volumes" Dec 02 11:00:13 crc kubenswrapper[4711]: I1202 11:00:13.906828 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jvgw9"] Dec 02 11:00:13 crc kubenswrapper[4711]: E1202 11:00:13.907745 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2850fec8-a81e-45ec-a7de-4722d8fd3b10" containerName="collect-profiles" Dec 02 11:00:13 crc kubenswrapper[4711]: I1202 11:00:13.907767 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="2850fec8-a81e-45ec-a7de-4722d8fd3b10" containerName="collect-profiles" Dec 02 11:00:13 crc kubenswrapper[4711]: I1202 11:00:13.908052 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="2850fec8-a81e-45ec-a7de-4722d8fd3b10" containerName="collect-profiles" Dec 02 11:00:13 crc kubenswrapper[4711]: I1202 11:00:13.909678 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:13 crc kubenswrapper[4711]: I1202 11:00:13.919737 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvgw9"] Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.005919 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-utilities\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.006176 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94snq\" (UniqueName: \"kubernetes.io/projected/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-kube-api-access-94snq\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.006515 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-catalog-content\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.108749 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-utilities\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.109118 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-94snq\" (UniqueName: \"kubernetes.io/projected/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-kube-api-access-94snq\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.109474 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-catalog-content\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.109968 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-catalog-content\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.110015 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-utilities\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.140628 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94snq\" (UniqueName: \"kubernetes.io/projected/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-kube-api-access-94snq\") pod \"certified-operators-jvgw9\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.243157 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:14 crc kubenswrapper[4711]: I1202 11:00:14.760519 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvgw9"] Dec 02 11:00:15 crc kubenswrapper[4711]: I1202 11:00:15.000714 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvgw9" event={"ID":"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93","Type":"ContainerStarted","Data":"3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54"} Dec 02 11:00:15 crc kubenswrapper[4711]: I1202 11:00:15.001212 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvgw9" event={"ID":"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93","Type":"ContainerStarted","Data":"74612187e57ffc528912506a0a32e772eff8d598a57823a7d21323fbeb080e15"} Dec 02 11:00:16 crc kubenswrapper[4711]: I1202 11:00:16.012779 4711 generic.go:334] "Generic (PLEG): container finished" podID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerID="3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54" exitCode=0 Dec 02 11:00:16 crc kubenswrapper[4711]: I1202 11:00:16.012847 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvgw9" event={"ID":"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93","Type":"ContainerDied","Data":"3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54"} Dec 02 11:00:18 crc kubenswrapper[4711]: I1202 11:00:18.037296 4711 generic.go:334] "Generic (PLEG): container finished" podID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerID="a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf" exitCode=0 Dec 02 11:00:18 crc kubenswrapper[4711]: I1202 11:00:18.037372 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvgw9" 
event={"ID":"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93","Type":"ContainerDied","Data":"a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf"} Dec 02 11:00:19 crc kubenswrapper[4711]: I1202 11:00:19.049544 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvgw9" event={"ID":"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93","Type":"ContainerStarted","Data":"477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c"} Dec 02 11:00:19 crc kubenswrapper[4711]: I1202 11:00:19.072774 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jvgw9" podStartSLOduration=3.524483137 podStartE2EDuration="6.072747644s" podCreationTimestamp="2025-12-02 11:00:13 +0000 UTC" firstStartedPulling="2025-12-02 11:00:16.014881821 +0000 UTC m=+2805.724248268" lastFinishedPulling="2025-12-02 11:00:18.563146328 +0000 UTC m=+2808.272512775" observedRunningTime="2025-12-02 11:00:19.070576635 +0000 UTC m=+2808.779943092" watchObservedRunningTime="2025-12-02 11:00:19.072747644 +0000 UTC m=+2808.782114111" Dec 02 11:00:24 crc kubenswrapper[4711]: I1202 11:00:24.244306 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:24 crc kubenswrapper[4711]: I1202 11:00:24.244796 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:24 crc kubenswrapper[4711]: I1202 11:00:24.302810 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:25 crc kubenswrapper[4711]: I1202 11:00:25.185794 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:25 crc kubenswrapper[4711]: I1202 11:00:25.236414 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-jvgw9"] Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.148659 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jvgw9" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerName="registry-server" containerID="cri-o://477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c" gracePeriod=2 Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.706042 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.829463 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94snq\" (UniqueName: \"kubernetes.io/projected/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-kube-api-access-94snq\") pod \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.829668 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-utilities\") pod \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.829711 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-catalog-content\") pod \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\" (UID: \"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93\") " Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.831206 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-utilities" (OuterVolumeSpecName: "utilities") pod "1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" (UID: 
"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.838373 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-kube-api-access-94snq" (OuterVolumeSpecName: "kube-api-access-94snq") pod "1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" (UID: "1dfde7de-74c7-49cf-a4f9-eb3bf45fce93"). InnerVolumeSpecName "kube-api-access-94snq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.902842 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" (UID: "1dfde7de-74c7-49cf-a4f9-eb3bf45fce93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.931631 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94snq\" (UniqueName: \"kubernetes.io/projected/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-kube-api-access-94snq\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.931670 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:27 crc kubenswrapper[4711]: I1202 11:00:27.932056 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.158867 4711 generic.go:334] "Generic (PLEG): container finished" 
podID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerID="477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c" exitCode=0 Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.158918 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvgw9" event={"ID":"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93","Type":"ContainerDied","Data":"477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c"} Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.158931 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvgw9" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.158971 4711 scope.go:117] "RemoveContainer" containerID="477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.158945 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvgw9" event={"ID":"1dfde7de-74c7-49cf-a4f9-eb3bf45fce93","Type":"ContainerDied","Data":"74612187e57ffc528912506a0a32e772eff8d598a57823a7d21323fbeb080e15"} Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.202888 4711 scope.go:117] "RemoveContainer" containerID="a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.212433 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvgw9"] Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.226674 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jvgw9"] Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.239546 4711 scope.go:117] "RemoveContainer" containerID="3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.296976 4711 scope.go:117] "RemoveContainer" 
containerID="477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c" Dec 02 11:00:28 crc kubenswrapper[4711]: E1202 11:00:28.297616 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c\": container with ID starting with 477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c not found: ID does not exist" containerID="477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.297656 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c"} err="failed to get container status \"477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c\": rpc error: code = NotFound desc = could not find container \"477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c\": container with ID starting with 477bca69a4e608ed6db7a2513d8ad1501d22bb402b87b854e7bbb205b1bf2f8c not found: ID does not exist" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.297681 4711 scope.go:117] "RemoveContainer" containerID="a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf" Dec 02 11:00:28 crc kubenswrapper[4711]: E1202 11:00:28.298239 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf\": container with ID starting with a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf not found: ID does not exist" containerID="a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.298258 4711 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf"} err="failed to get container status \"a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf\": rpc error: code = NotFound desc = could not find container \"a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf\": container with ID starting with a21865b78200b8132fa0d0555fe6e4443692624eb6d6002b5990a28c39dcdcdf not found: ID does not exist" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.298272 4711 scope.go:117] "RemoveContainer" containerID="3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54" Dec 02 11:00:28 crc kubenswrapper[4711]: E1202 11:00:28.298518 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54\": container with ID starting with 3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54 not found: ID does not exist" containerID="3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54" Dec 02 11:00:28 crc kubenswrapper[4711]: I1202 11:00:28.298546 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54"} err="failed to get container status \"3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54\": rpc error: code = NotFound desc = could not find container \"3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54\": container with ID starting with 3980e1becb89fb3d7aec65b7f18aca1950fc9ba13857d03c8038ee4c06b30f54 not found: ID does not exist" Dec 02 11:00:29 crc kubenswrapper[4711]: I1202 11:00:29.103794 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" path="/var/lib/kubelet/pods/1dfde7de-74c7-49cf-a4f9-eb3bf45fce93/volumes" Dec 02 11:00:37 crc kubenswrapper[4711]: I1202 
11:00:37.056429 4711 scope.go:117] "RemoveContainer" containerID="570d7a98c3d76ef9a0ed0a24d03e45cdf052691099142369ff66aa174405ad7f" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.171272 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29411221-c27ql"] Dec 02 11:01:00 crc kubenswrapper[4711]: E1202 11:01:00.172777 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerName="extract-content" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.172815 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerName="extract-content" Dec 02 11:01:00 crc kubenswrapper[4711]: E1202 11:01:00.172860 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerName="registry-server" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.172868 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerName="registry-server" Dec 02 11:01:00 crc kubenswrapper[4711]: E1202 11:01:00.172889 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerName="extract-utilities" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.172898 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerName="extract-utilities" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.173197 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfde7de-74c7-49cf-a4f9-eb3bf45fce93" containerName="registry-server" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.174095 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.183084 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29411221-c27ql"] Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.307819 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44nw5\" (UniqueName: \"kubernetes.io/projected/37b4f06f-7175-4bee-85ee-970775ae49a8-kube-api-access-44nw5\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.307888 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-config-data\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.307923 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-combined-ca-bundle\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.308090 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-fernet-keys\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.409511 4711 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-fernet-keys\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.409592 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44nw5\" (UniqueName: \"kubernetes.io/projected/37b4f06f-7175-4bee-85ee-970775ae49a8-kube-api-access-44nw5\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.409622 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-config-data\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.409651 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-combined-ca-bundle\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.415646 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-config-data\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.416432 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-fernet-keys\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.417336 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-combined-ca-bundle\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.433071 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44nw5\" (UniqueName: \"kubernetes.io/projected/37b4f06f-7175-4bee-85ee-970775ae49a8-kube-api-access-44nw5\") pod \"keystone-cron-29411221-c27ql\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.496364 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:00 crc kubenswrapper[4711]: I1202 11:01:00.995773 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29411221-c27ql"] Dec 02 11:01:01 crc kubenswrapper[4711]: I1202 11:01:01.528974 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411221-c27ql" event={"ID":"37b4f06f-7175-4bee-85ee-970775ae49a8","Type":"ContainerStarted","Data":"2a76a4a562d9a8f867c36270cc7d08b67711eb2e8a523064cd677d5d22b80a4e"} Dec 02 11:01:01 crc kubenswrapper[4711]: I1202 11:01:01.529327 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411221-c27ql" event={"ID":"37b4f06f-7175-4bee-85ee-970775ae49a8","Type":"ContainerStarted","Data":"b5e64c838703e3262ec4f52cfe722dcbe29bd229be587bc8932c533da3bdd536"} Dec 02 11:01:01 crc kubenswrapper[4711]: I1202 11:01:01.560871 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29411221-c27ql" podStartSLOduration=1.5608362489999998 podStartE2EDuration="1.560836249s" podCreationTimestamp="2025-12-02 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:01:01.549766853 +0000 UTC m=+2851.259133330" watchObservedRunningTime="2025-12-02 11:01:01.560836249 +0000 UTC m=+2851.270202706" Dec 02 11:01:03 crc kubenswrapper[4711]: I1202 11:01:03.552684 4711 generic.go:334] "Generic (PLEG): container finished" podID="37b4f06f-7175-4bee-85ee-970775ae49a8" containerID="2a76a4a562d9a8f867c36270cc7d08b67711eb2e8a523064cd677d5d22b80a4e" exitCode=0 Dec 02 11:01:03 crc kubenswrapper[4711]: I1202 11:01:03.552774 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411221-c27ql" 
event={"ID":"37b4f06f-7175-4bee-85ee-970775ae49a8","Type":"ContainerDied","Data":"2a76a4a562d9a8f867c36270cc7d08b67711eb2e8a523064cd677d5d22b80a4e"} Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.026518 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.127363 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-config-data\") pod \"37b4f06f-7175-4bee-85ee-970775ae49a8\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.127423 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44nw5\" (UniqueName: \"kubernetes.io/projected/37b4f06f-7175-4bee-85ee-970775ae49a8-kube-api-access-44nw5\") pod \"37b4f06f-7175-4bee-85ee-970775ae49a8\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.128158 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-fernet-keys\") pod \"37b4f06f-7175-4bee-85ee-970775ae49a8\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.128674 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-combined-ca-bundle\") pod \"37b4f06f-7175-4bee-85ee-970775ae49a8\" (UID: \"37b4f06f-7175-4bee-85ee-970775ae49a8\") " Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.134621 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b4f06f-7175-4bee-85ee-970775ae49a8-kube-api-access-44nw5" 
(OuterVolumeSpecName: "kube-api-access-44nw5") pod "37b4f06f-7175-4bee-85ee-970775ae49a8" (UID: "37b4f06f-7175-4bee-85ee-970775ae49a8"). InnerVolumeSpecName "kube-api-access-44nw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.135444 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "37b4f06f-7175-4bee-85ee-970775ae49a8" (UID: "37b4f06f-7175-4bee-85ee-970775ae49a8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.171069 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37b4f06f-7175-4bee-85ee-970775ae49a8" (UID: "37b4f06f-7175-4bee-85ee-970775ae49a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.190195 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-config-data" (OuterVolumeSpecName: "config-data") pod "37b4f06f-7175-4bee-85ee-970775ae49a8" (UID: "37b4f06f-7175-4bee-85ee-970775ae49a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.231199 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.231238 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44nw5\" (UniqueName: \"kubernetes.io/projected/37b4f06f-7175-4bee-85ee-970775ae49a8-kube-api-access-44nw5\") on node \"crc\" DevicePath \"\"" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.231265 4711 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.231286 4711 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b4f06f-7175-4bee-85ee-970775ae49a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.579911 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411221-c27ql" event={"ID":"37b4f06f-7175-4bee-85ee-970775ae49a8","Type":"ContainerDied","Data":"b5e64c838703e3262ec4f52cfe722dcbe29bd229be587bc8932c533da3bdd536"} Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.580041 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29411221-c27ql" Dec 02 11:01:05 crc kubenswrapper[4711]: I1202 11:01:05.580156 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e64c838703e3262ec4f52cfe722dcbe29bd229be587bc8932c533da3bdd536" Dec 02 11:01:22 crc kubenswrapper[4711]: I1202 11:01:22.586058 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:01:22 crc kubenswrapper[4711]: I1202 11:01:22.586919 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:01:52 crc kubenswrapper[4711]: I1202 11:01:52.586047 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:01:52 crc kubenswrapper[4711]: I1202 11:01:52.586705 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:02:20 crc kubenswrapper[4711]: I1202 11:02:20.842430 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qdvn8"] Dec 02 11:02:20 crc kubenswrapper[4711]: E1202 
11:02:20.844030 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b4f06f-7175-4bee-85ee-970775ae49a8" containerName="keystone-cron" Dec 02 11:02:20 crc kubenswrapper[4711]: I1202 11:02:20.844063 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b4f06f-7175-4bee-85ee-970775ae49a8" containerName="keystone-cron" Dec 02 11:02:20 crc kubenswrapper[4711]: I1202 11:02:20.844509 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b4f06f-7175-4bee-85ee-970775ae49a8" containerName="keystone-cron" Dec 02 11:02:20 crc kubenswrapper[4711]: I1202 11:02:20.846923 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:20 crc kubenswrapper[4711]: I1202 11:02:20.861721 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdvn8"] Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.011627 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-catalog-content\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.012040 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-utilities\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.012150 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzmz\" (UniqueName: 
\"kubernetes.io/projected/42987ca3-6b4d-477a-8498-56286df48235-kube-api-access-cbzmz\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.114206 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzmz\" (UniqueName: \"kubernetes.io/projected/42987ca3-6b4d-477a-8498-56286df48235-kube-api-access-cbzmz\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.114676 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-catalog-content\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.114840 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-utilities\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.115701 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-utilities\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.115942 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-catalog-content\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.138132 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzmz\" (UniqueName: \"kubernetes.io/projected/42987ca3-6b4d-477a-8498-56286df48235-kube-api-access-cbzmz\") pod \"community-operators-qdvn8\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.184128 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:21 crc kubenswrapper[4711]: I1202 11:02:21.713776 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdvn8"] Dec 02 11:02:22 crc kubenswrapper[4711]: I1202 11:02:22.407500 4711 generic.go:334] "Generic (PLEG): container finished" podID="42987ca3-6b4d-477a-8498-56286df48235" containerID="ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a" exitCode=0 Dec 02 11:02:22 crc kubenswrapper[4711]: I1202 11:02:22.407567 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdvn8" event={"ID":"42987ca3-6b4d-477a-8498-56286df48235","Type":"ContainerDied","Data":"ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a"} Dec 02 11:02:22 crc kubenswrapper[4711]: I1202 11:02:22.407776 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdvn8" event={"ID":"42987ca3-6b4d-477a-8498-56286df48235","Type":"ContainerStarted","Data":"57eef7a6dc55cf59259d074024d8871fa6dca9811535e8d404e406e72bb5a4af"} Dec 02 11:02:22 crc kubenswrapper[4711]: I1202 11:02:22.586505 4711 patch_prober.go:28] interesting 
pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:02:22 crc kubenswrapper[4711]: I1202 11:02:22.586583 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:02:22 crc kubenswrapper[4711]: I1202 11:02:22.586636 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 11:02:22 crc kubenswrapper[4711]: I1202 11:02:22.587519 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:02:22 crc kubenswrapper[4711]: I1202 11:02:22.587596 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" gracePeriod=600 Dec 02 11:02:22 crc kubenswrapper[4711]: E1202 11:02:22.725030 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:02:23 crc kubenswrapper[4711]: I1202 11:02:23.421695 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" exitCode=0 Dec 02 11:02:23 crc kubenswrapper[4711]: I1202 11:02:23.421792 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d"} Dec 02 11:02:23 crc kubenswrapper[4711]: I1202 11:02:23.422795 4711 scope.go:117] "RemoveContainer" containerID="ce1d82b527eae7c31f9a034a481bb3607adffd804c9c681434ee7921132c3317" Dec 02 11:02:23 crc kubenswrapper[4711]: I1202 11:02:23.423820 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:02:23 crc kubenswrapper[4711]: E1202 11:02:23.424360 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:02:24 crc kubenswrapper[4711]: I1202 11:02:24.432845 4711 generic.go:334] "Generic (PLEG): container finished" podID="42987ca3-6b4d-477a-8498-56286df48235" containerID="db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b" exitCode=0 Dec 02 11:02:24 crc kubenswrapper[4711]: I1202 11:02:24.432944 4711 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdvn8" event={"ID":"42987ca3-6b4d-477a-8498-56286df48235","Type":"ContainerDied","Data":"db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b"} Dec 02 11:02:25 crc kubenswrapper[4711]: I1202 11:02:25.460635 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdvn8" event={"ID":"42987ca3-6b4d-477a-8498-56286df48235","Type":"ContainerStarted","Data":"1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c"} Dec 02 11:02:25 crc kubenswrapper[4711]: I1202 11:02:25.479371 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qdvn8" podStartSLOduration=2.81092019 podStartE2EDuration="5.479340708s" podCreationTimestamp="2025-12-02 11:02:20 +0000 UTC" firstStartedPulling="2025-12-02 11:02:22.409889975 +0000 UTC m=+2932.119256432" lastFinishedPulling="2025-12-02 11:02:25.078310463 +0000 UTC m=+2934.787676950" observedRunningTime="2025-12-02 11:02:25.476180724 +0000 UTC m=+2935.185547171" watchObservedRunningTime="2025-12-02 11:02:25.479340708 +0000 UTC m=+2935.188707155" Dec 02 11:02:31 crc kubenswrapper[4711]: I1202 11:02:31.186235 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:31 crc kubenswrapper[4711]: I1202 11:02:31.186975 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:31 crc kubenswrapper[4711]: I1202 11:02:31.252839 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:31 crc kubenswrapper[4711]: I1202 11:02:31.571892 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:31 crc 
kubenswrapper[4711]: I1202 11:02:31.635883 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdvn8"] Dec 02 11:02:33 crc kubenswrapper[4711]: I1202 11:02:33.539036 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qdvn8" podUID="42987ca3-6b4d-477a-8498-56286df48235" containerName="registry-server" containerID="cri-o://1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c" gracePeriod=2 Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.226747 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.253316 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-catalog-content\") pod \"42987ca3-6b4d-477a-8498-56286df48235\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.253705 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-utilities\") pod \"42987ca3-6b4d-477a-8498-56286df48235\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.253858 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbzmz\" (UniqueName: \"kubernetes.io/projected/42987ca3-6b4d-477a-8498-56286df48235-kube-api-access-cbzmz\") pod \"42987ca3-6b4d-477a-8498-56286df48235\" (UID: \"42987ca3-6b4d-477a-8498-56286df48235\") " Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.254447 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-utilities" (OuterVolumeSpecName: "utilities") pod "42987ca3-6b4d-477a-8498-56286df48235" (UID: "42987ca3-6b4d-477a-8498-56286df48235"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.255067 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.261244 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42987ca3-6b4d-477a-8498-56286df48235-kube-api-access-cbzmz" (OuterVolumeSpecName: "kube-api-access-cbzmz") pod "42987ca3-6b4d-477a-8498-56286df48235" (UID: "42987ca3-6b4d-477a-8498-56286df48235"). InnerVolumeSpecName "kube-api-access-cbzmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.313770 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42987ca3-6b4d-477a-8498-56286df48235" (UID: "42987ca3-6b4d-477a-8498-56286df48235"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.356484 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbzmz\" (UniqueName: \"kubernetes.io/projected/42987ca3-6b4d-477a-8498-56286df48235-kube-api-access-cbzmz\") on node \"crc\" DevicePath \"\"" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.356532 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42987ca3-6b4d-477a-8498-56286df48235-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.558339 4711 generic.go:334] "Generic (PLEG): container finished" podID="42987ca3-6b4d-477a-8498-56286df48235" containerID="1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c" exitCode=0 Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.558458 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdvn8" event={"ID":"42987ca3-6b4d-477a-8498-56286df48235","Type":"ContainerDied","Data":"1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c"} Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.558485 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdvn8" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.558505 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdvn8" event={"ID":"42987ca3-6b4d-477a-8498-56286df48235","Type":"ContainerDied","Data":"57eef7a6dc55cf59259d074024d8871fa6dca9811535e8d404e406e72bb5a4af"} Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.558524 4711 scope.go:117] "RemoveContainer" containerID="1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.592923 4711 scope.go:117] "RemoveContainer" containerID="db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.612812 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdvn8"] Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.622355 4711 scope.go:117] "RemoveContainer" containerID="ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.627561 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qdvn8"] Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.680939 4711 scope.go:117] "RemoveContainer" containerID="1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c" Dec 02 11:02:34 crc kubenswrapper[4711]: E1202 11:02:34.681776 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c\": container with ID starting with 1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c not found: ID does not exist" containerID="1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.681840 4711 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c"} err="failed to get container status \"1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c\": rpc error: code = NotFound desc = could not find container \"1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c\": container with ID starting with 1f52a6956164aa2a8ef2c7d0124a3c9569154a0d99f65bd5c1d67c2cff15442c not found: ID does not exist" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.681891 4711 scope.go:117] "RemoveContainer" containerID="db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b" Dec 02 11:02:34 crc kubenswrapper[4711]: E1202 11:02:34.682418 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b\": container with ID starting with db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b not found: ID does not exist" containerID="db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.682474 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b"} err="failed to get container status \"db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b\": rpc error: code = NotFound desc = could not find container \"db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b\": container with ID starting with db485770d45fb32207a16a4051b01b3b998ca0d804ed2377f176b571e63b290b not found: ID does not exist" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.682519 4711 scope.go:117] "RemoveContainer" containerID="ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a" Dec 02 11:02:34 crc kubenswrapper[4711]: E1202 
11:02:34.683034 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a\": container with ID starting with ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a not found: ID does not exist" containerID="ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a" Dec 02 11:02:34 crc kubenswrapper[4711]: I1202 11:02:34.683078 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a"} err="failed to get container status \"ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a\": rpc error: code = NotFound desc = could not find container \"ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a\": container with ID starting with ace29b8af44cfe0b144dbbce405f55e2754c4194284d7cac287849c87495094a not found: ID does not exist" Dec 02 11:02:35 crc kubenswrapper[4711]: I1202 11:02:35.078850 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:02:35 crc kubenswrapper[4711]: E1202 11:02:35.079501 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:02:35 crc kubenswrapper[4711]: I1202 11:02:35.095172 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42987ca3-6b4d-477a-8498-56286df48235" path="/var/lib/kubelet/pods/42987ca3-6b4d-477a-8498-56286df48235/volumes" Dec 02 11:02:49 crc kubenswrapper[4711]: I1202 11:02:49.079144 
4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:02:49 crc kubenswrapper[4711]: E1202 11:02:49.080104 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:03:04 crc kubenswrapper[4711]: I1202 11:03:04.078656 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:03:04 crc kubenswrapper[4711]: E1202 11:03:04.079337 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:03:17 crc kubenswrapper[4711]: I1202 11:03:17.078438 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:03:17 crc kubenswrapper[4711]: E1202 11:03:17.079123 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:03:30 crc kubenswrapper[4711]: I1202 
11:03:30.079140 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:03:30 crc kubenswrapper[4711]: E1202 11:03:30.080407 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:03:45 crc kubenswrapper[4711]: I1202 11:03:45.081943 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:03:45 crc kubenswrapper[4711]: E1202 11:03:45.082749 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:03:57 crc kubenswrapper[4711]: I1202 11:03:57.078257 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:03:57 crc kubenswrapper[4711]: E1202 11:03:57.079093 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:04:08 crc 
kubenswrapper[4711]: I1202 11:04:08.080738 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:04:08 crc kubenswrapper[4711]: E1202 11:04:08.081808 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:04:20 crc kubenswrapper[4711]: I1202 11:04:20.079179 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:04:20 crc kubenswrapper[4711]: E1202 11:04:20.081878 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:04:34 crc kubenswrapper[4711]: I1202 11:04:34.078172 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:04:34 crc kubenswrapper[4711]: E1202 11:04:34.079106 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 
02 11:04:47 crc kubenswrapper[4711]: I1202 11:04:47.078473 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:04:47 crc kubenswrapper[4711]: E1202 11:04:47.079460 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:05:02 crc kubenswrapper[4711]: I1202 11:05:02.081831 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:05:02 crc kubenswrapper[4711]: E1202 11:05:02.083347 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:05:13 crc kubenswrapper[4711]: I1202 11:05:13.079331 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:05:13 crc kubenswrapper[4711]: E1202 11:05:13.080279 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" 
podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:05:26 crc kubenswrapper[4711]: I1202 11:05:26.079468 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:05:26 crc kubenswrapper[4711]: E1202 11:05:26.080758 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:05:37 crc kubenswrapper[4711]: I1202 11:05:37.078787 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:05:37 crc kubenswrapper[4711]: E1202 11:05:37.079867 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:05:50 crc kubenswrapper[4711]: I1202 11:05:50.078605 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:05:50 crc kubenswrapper[4711]: E1202 11:05:50.079311 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:06:03 crc kubenswrapper[4711]: I1202 11:06:03.079649 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:06:03 crc kubenswrapper[4711]: E1202 11:06:03.080709 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:06:16 crc kubenswrapper[4711]: I1202 11:06:16.078509 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:06:16 crc kubenswrapper[4711]: E1202 11:06:16.079439 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:06:31 crc kubenswrapper[4711]: I1202 11:06:31.087884 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:06:31 crc kubenswrapper[4711]: E1202 11:06:31.090271 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:06:44 crc kubenswrapper[4711]: I1202 11:06:44.079306 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:06:44 crc kubenswrapper[4711]: E1202 11:06:44.080587 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:06:58 crc kubenswrapper[4711]: I1202 11:06:58.078605 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:06:58 crc kubenswrapper[4711]: E1202 11:06:58.079573 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:07:11 crc kubenswrapper[4711]: I1202 11:07:11.090831 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:07:11 crc kubenswrapper[4711]: E1202 11:07:11.092049 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:07:26 crc kubenswrapper[4711]: I1202 11:07:26.078450 4711 scope.go:117] "RemoveContainer" containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:07:26 crc kubenswrapper[4711]: I1202 11:07:26.840186 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"bcb5ee66f74bc7fa910b7fd68c8b56646b704858f1b9b5bdf111d82410fbe2fc"} Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.606340 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4ckhq"] Dec 02 11:09:24 crc kubenswrapper[4711]: E1202 11:09:24.607511 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42987ca3-6b4d-477a-8498-56286df48235" containerName="registry-server" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.607551 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="42987ca3-6b4d-477a-8498-56286df48235" containerName="registry-server" Dec 02 11:09:24 crc kubenswrapper[4711]: E1202 11:09:24.607573 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42987ca3-6b4d-477a-8498-56286df48235" containerName="extract-content" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.607581 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="42987ca3-6b4d-477a-8498-56286df48235" containerName="extract-content" Dec 02 11:09:24 crc kubenswrapper[4711]: E1202 11:09:24.607634 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42987ca3-6b4d-477a-8498-56286df48235" containerName="extract-utilities" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 
11:09:24.607642 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="42987ca3-6b4d-477a-8498-56286df48235" containerName="extract-utilities" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.607876 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="42987ca3-6b4d-477a-8498-56286df48235" containerName="registry-server" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.609836 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.628137 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ckhq"] Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.689574 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-utilities\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.689674 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-catalog-content\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.689883 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjwzm\" (UniqueName: \"kubernetes.io/projected/b5323e55-16c6-4f6f-bfef-a5da4d10c278-kube-api-access-gjwzm\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: 
I1202 11:09:24.791821 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjwzm\" (UniqueName: \"kubernetes.io/projected/b5323e55-16c6-4f6f-bfef-a5da4d10c278-kube-api-access-gjwzm\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.791931 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-utilities\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.792010 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-catalog-content\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.792381 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-utilities\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.792407 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-catalog-content\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.815737 4711 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjwzm\" (UniqueName: \"kubernetes.io/projected/b5323e55-16c6-4f6f-bfef-a5da4d10c278-kube-api-access-gjwzm\") pod \"redhat-marketplace-4ckhq\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:24 crc kubenswrapper[4711]: I1202 11:09:24.930181 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:25 crc kubenswrapper[4711]: I1202 11:09:25.436673 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ckhq"] Dec 02 11:09:26 crc kubenswrapper[4711]: I1202 11:09:26.104077 4711 generic.go:334] "Generic (PLEG): container finished" podID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerID="fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6" exitCode=0 Dec 02 11:09:26 crc kubenswrapper[4711]: I1202 11:09:26.104139 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ckhq" event={"ID":"b5323e55-16c6-4f6f-bfef-a5da4d10c278","Type":"ContainerDied","Data":"fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6"} Dec 02 11:09:26 crc kubenswrapper[4711]: I1202 11:09:26.104604 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ckhq" event={"ID":"b5323e55-16c6-4f6f-bfef-a5da4d10c278","Type":"ContainerStarted","Data":"30210169f899230a1334823d5700c74571c9491f86610dbcb28e2f95e16c8ee7"} Dec 02 11:09:26 crc kubenswrapper[4711]: I1202 11:09:26.107399 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:09:27 crc kubenswrapper[4711]: I1202 11:09:27.117717 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ckhq" 
event={"ID":"b5323e55-16c6-4f6f-bfef-a5da4d10c278","Type":"ContainerStarted","Data":"1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb"} Dec 02 11:09:28 crc kubenswrapper[4711]: I1202 11:09:28.132311 4711 generic.go:334] "Generic (PLEG): container finished" podID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerID="1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb" exitCode=0 Dec 02 11:09:28 crc kubenswrapper[4711]: I1202 11:09:28.132383 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ckhq" event={"ID":"b5323e55-16c6-4f6f-bfef-a5da4d10c278","Type":"ContainerDied","Data":"1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb"} Dec 02 11:09:29 crc kubenswrapper[4711]: I1202 11:09:29.157538 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ckhq" event={"ID":"b5323e55-16c6-4f6f-bfef-a5da4d10c278","Type":"ContainerStarted","Data":"db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3"} Dec 02 11:09:29 crc kubenswrapper[4711]: I1202 11:09:29.185819 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4ckhq" podStartSLOduration=2.431815947 podStartE2EDuration="5.185768032s" podCreationTimestamp="2025-12-02 11:09:24 +0000 UTC" firstStartedPulling="2025-12-02 11:09:26.106883997 +0000 UTC m=+3355.816250474" lastFinishedPulling="2025-12-02 11:09:28.860836102 +0000 UTC m=+3358.570202559" observedRunningTime="2025-12-02 11:09:29.180773849 +0000 UTC m=+3358.890140356" watchObservedRunningTime="2025-12-02 11:09:29.185768032 +0000 UTC m=+3358.895134469" Dec 02 11:09:34 crc kubenswrapper[4711]: I1202 11:09:34.931893 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:34 crc kubenswrapper[4711]: I1202 11:09:34.933664 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:35 crc kubenswrapper[4711]: I1202 11:09:35.006142 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:35 crc kubenswrapper[4711]: I1202 11:09:35.274398 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:35 crc kubenswrapper[4711]: I1202 11:09:35.370010 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ckhq"] Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.249569 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4ckhq" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerName="registry-server" containerID="cri-o://db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3" gracePeriod=2 Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.792400 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.866170 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-utilities\") pod \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.866411 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjwzm\" (UniqueName: \"kubernetes.io/projected/b5323e55-16c6-4f6f-bfef-a5da4d10c278-kube-api-access-gjwzm\") pod \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.866533 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-catalog-content\") pod \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\" (UID: \"b5323e55-16c6-4f6f-bfef-a5da4d10c278\") " Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.867834 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-utilities" (OuterVolumeSpecName: "utilities") pod "b5323e55-16c6-4f6f-bfef-a5da4d10c278" (UID: "b5323e55-16c6-4f6f-bfef-a5da4d10c278"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.873011 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5323e55-16c6-4f6f-bfef-a5da4d10c278-kube-api-access-gjwzm" (OuterVolumeSpecName: "kube-api-access-gjwzm") pod "b5323e55-16c6-4f6f-bfef-a5da4d10c278" (UID: "b5323e55-16c6-4f6f-bfef-a5da4d10c278"). InnerVolumeSpecName "kube-api-access-gjwzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.886820 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5323e55-16c6-4f6f-bfef-a5da4d10c278" (UID: "b5323e55-16c6-4f6f-bfef-a5da4d10c278"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.969019 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.969299 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjwzm\" (UniqueName: \"kubernetes.io/projected/b5323e55-16c6-4f6f-bfef-a5da4d10c278-kube-api-access-gjwzm\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:37 crc kubenswrapper[4711]: I1202 11:09:37.969311 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5323e55-16c6-4f6f-bfef-a5da4d10c278-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.264339 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ckhq" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.264270 4711 generic.go:334] "Generic (PLEG): container finished" podID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerID="db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3" exitCode=0 Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.264357 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ckhq" event={"ID":"b5323e55-16c6-4f6f-bfef-a5da4d10c278","Type":"ContainerDied","Data":"db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3"} Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.264419 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ckhq" event={"ID":"b5323e55-16c6-4f6f-bfef-a5da4d10c278","Type":"ContainerDied","Data":"30210169f899230a1334823d5700c74571c9491f86610dbcb28e2f95e16c8ee7"} Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.264446 4711 scope.go:117] "RemoveContainer" containerID="db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.311528 4711 scope.go:117] "RemoveContainer" containerID="1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.318083 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ckhq"] Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.325919 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ckhq"] Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.332402 4711 scope.go:117] "RemoveContainer" containerID="fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.376652 4711 scope.go:117] "RemoveContainer" 
containerID="db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3" Dec 02 11:09:38 crc kubenswrapper[4711]: E1202 11:09:38.377170 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3\": container with ID starting with db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3 not found: ID does not exist" containerID="db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.377317 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3"} err="failed to get container status \"db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3\": rpc error: code = NotFound desc = could not find container \"db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3\": container with ID starting with db423d275ebe989d2ae6ffaef01566d2ffa09e94a6e70593ee1951a48ad3fcd3 not found: ID does not exist" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.377408 4711 scope.go:117] "RemoveContainer" containerID="1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb" Dec 02 11:09:38 crc kubenswrapper[4711]: E1202 11:09:38.377905 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb\": container with ID starting with 1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb not found: ID does not exist" containerID="1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.377979 4711 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb"} err="failed to get container status \"1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb\": rpc error: code = NotFound desc = could not find container \"1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb\": container with ID starting with 1d2bc0bfd0c45c5759f19997b90d64dad0200bd81382849cd443fc53a6bc83cb not found: ID does not exist" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.378016 4711 scope.go:117] "RemoveContainer" containerID="fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6" Dec 02 11:09:38 crc kubenswrapper[4711]: E1202 11:09:38.378391 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6\": container with ID starting with fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6 not found: ID does not exist" containerID="fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6" Dec 02 11:09:38 crc kubenswrapper[4711]: I1202 11:09:38.378500 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6"} err="failed to get container status \"fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6\": rpc error: code = NotFound desc = could not find container \"fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6\": container with ID starting with fed464cd6156082347d0bb0829674cd6eec27a325a97c96e2dd0174ff8b88ec6 not found: ID does not exist" Dec 02 11:09:39 crc kubenswrapper[4711]: I1202 11:09:39.092028 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" path="/var/lib/kubelet/pods/b5323e55-16c6-4f6f-bfef-a5da4d10c278/volumes" Dec 02 11:09:52 crc kubenswrapper[4711]: I1202 
11:09:52.585549 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:09:52 crc kubenswrapper[4711]: I1202 11:09:52.586283 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:09:58 crc kubenswrapper[4711]: I1202 11:09:58.496449 4711 generic.go:334] "Generic (PLEG): container finished" podID="725581bd-6264-4ca6-b1fa-126c3c50800b" containerID="c63722886c96dd1326b26496648f05cb1d03d4f44c013f31c5950d7f285b9d77" exitCode=0 Dec 02 11:09:58 crc kubenswrapper[4711]: I1202 11:09:58.496545 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"725581bd-6264-4ca6-b1fa-126c3c50800b","Type":"ContainerDied","Data":"c63722886c96dd1326b26496648f05cb1d03d4f44c013f31c5950d7f285b9d77"} Dec 02 11:09:59 crc kubenswrapper[4711]: I1202 11:09:59.989331 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065320 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2rrx\" (UniqueName: \"kubernetes.io/projected/725581bd-6264-4ca6-b1fa-126c3c50800b-kube-api-access-j2rrx\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065378 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config-secret\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065469 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-config-data\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065526 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065591 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-temporary\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065629 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ssh-key\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065738 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065781 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ca-certs\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.065832 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-workdir\") pod \"725581bd-6264-4ca6-b1fa-126c3c50800b\" (UID: \"725581bd-6264-4ca6-b1fa-126c3c50800b\") " Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.067204 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-config-data" (OuterVolumeSpecName: "config-data") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.067670 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.072154 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725581bd-6264-4ca6-b1fa-126c3c50800b-kube-api-access-j2rrx" (OuterVolumeSpecName: "kube-api-access-j2rrx") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "kube-api-access-j2rrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.078124 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.088006 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.093048 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.106721 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.114551 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.117492 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "725581bd-6264-4ca6-b1fa-126c3c50800b" (UID: "725581bd-6264-4ca6-b1fa-126c3c50800b"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.167857 4711 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.169613 4711 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.169637 4711 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.169652 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2rrx\" (UniqueName: \"kubernetes.io/projected/725581bd-6264-4ca6-b1fa-126c3c50800b-kube-api-access-j2rrx\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.169667 4711 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.169706 4711 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/725581bd-6264-4ca6-b1fa-126c3c50800b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.169769 4711 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 02 11:10:00 crc 
kubenswrapper[4711]: I1202 11:10:00.169783 4711 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/725581bd-6264-4ca6-b1fa-126c3c50800b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.169798 4711 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/725581bd-6264-4ca6-b1fa-126c3c50800b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.189936 4711 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.275131 4711 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.520009 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"725581bd-6264-4ca6-b1fa-126c3c50800b","Type":"ContainerDied","Data":"a12123d320a2f2d0c18c9e9ba5d92707969eb8fb921fc95eb684715bb5367b32"} Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.520319 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12123d320a2f2d0c18c9e9ba5d92707969eb8fb921fc95eb684715bb5367b32" Dec 02 11:10:00 crc kubenswrapper[4711]: I1202 11:10:00.520065 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.374187 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 02 11:10:11 crc kubenswrapper[4711]: E1202 11:10:11.375384 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerName="extract-content" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.375411 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerName="extract-content" Dec 02 11:10:11 crc kubenswrapper[4711]: E1202 11:10:11.375428 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerName="registry-server" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.375434 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerName="registry-server" Dec 02 11:10:11 crc kubenswrapper[4711]: E1202 11:10:11.375450 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerName="extract-utilities" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.375459 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerName="extract-utilities" Dec 02 11:10:11 crc kubenswrapper[4711]: E1202 11:10:11.375474 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725581bd-6264-4ca6-b1fa-126c3c50800b" containerName="tempest-tests-tempest-tests-runner" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.375480 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="725581bd-6264-4ca6-b1fa-126c3c50800b" containerName="tempest-tests-tempest-tests-runner" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.375653 4711 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="725581bd-6264-4ca6-b1fa-126c3c50800b" containerName="tempest-tests-tempest-tests-runner" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.375669 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5323e55-16c6-4f6f-bfef-a5da4d10c278" containerName="registry-server" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.376387 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.381882 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bmhll" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.397907 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.512346 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5hd\" (UniqueName: \"kubernetes.io/projected/76f6fd71-c403-415b-9402-f3fcd9ab0fd4-kube-api-access-kb5hd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"76f6fd71-c403-415b-9402-f3fcd9ab0fd4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.512445 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"76f6fd71-c403-415b-9402-f3fcd9ab0fd4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.614552 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5hd\" (UniqueName: 
\"kubernetes.io/projected/76f6fd71-c403-415b-9402-f3fcd9ab0fd4-kube-api-access-kb5hd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"76f6fd71-c403-415b-9402-f3fcd9ab0fd4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.614607 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"76f6fd71-c403-415b-9402-f3fcd9ab0fd4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.615413 4711 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"76f6fd71-c403-415b-9402-f3fcd9ab0fd4\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.641803 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5hd\" (UniqueName: \"kubernetes.io/projected/76f6fd71-c403-415b-9402-f3fcd9ab0fd4-kube-api-access-kb5hd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"76f6fd71-c403-415b-9402-f3fcd9ab0fd4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:11 crc kubenswrapper[4711]: I1202 11:10:11.661347 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"76f6fd71-c403-415b-9402-f3fcd9ab0fd4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:11 
crc kubenswrapper[4711]: I1202 11:10:11.706750 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 11:10:12 crc kubenswrapper[4711]: I1202 11:10:12.175145 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 02 11:10:12 crc kubenswrapper[4711]: W1202 11:10:12.183851 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76f6fd71_c403_415b_9402_f3fcd9ab0fd4.slice/crio-2cac3503596b11ead04099c9681004c63d436ebb8330d1ecb8b46d1079261c30 WatchSource:0}: Error finding container 2cac3503596b11ead04099c9681004c63d436ebb8330d1ecb8b46d1079261c30: Status 404 returned error can't find the container with id 2cac3503596b11ead04099c9681004c63d436ebb8330d1ecb8b46d1079261c30 Dec 02 11:10:12 crc kubenswrapper[4711]: I1202 11:10:12.665892 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"76f6fd71-c403-415b-9402-f3fcd9ab0fd4","Type":"ContainerStarted","Data":"2cac3503596b11ead04099c9681004c63d436ebb8330d1ecb8b46d1079261c30"} Dec 02 11:10:13 crc kubenswrapper[4711]: I1202 11:10:13.674844 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"76f6fd71-c403-415b-9402-f3fcd9ab0fd4","Type":"ContainerStarted","Data":"a75d77b1280ab38fce58f7c57ddcc462a0ef27f039b7f0228664c2b6e2a41ece"} Dec 02 11:10:22 crc kubenswrapper[4711]: I1202 11:10:22.586214 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:10:22 crc kubenswrapper[4711]: I1202 
11:10:22.586816 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:10:30 crc kubenswrapper[4711]: I1202 11:10:30.794833 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=18.649995728 podStartE2EDuration="19.794778659s" podCreationTimestamp="2025-12-02 11:10:11 +0000 UTC" firstStartedPulling="2025-12-02 11:10:12.190353575 +0000 UTC m=+3401.899720012" lastFinishedPulling="2025-12-02 11:10:13.335136486 +0000 UTC m=+3403.044502943" observedRunningTime="2025-12-02 11:10:13.68889503 +0000 UTC m=+3403.398261477" watchObservedRunningTime="2025-12-02 11:10:30.794778659 +0000 UTC m=+3420.504145146" Dec 02 11:10:30 crc kubenswrapper[4711]: I1202 11:10:30.797887 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4sf4q"] Dec 02 11:10:30 crc kubenswrapper[4711]: I1202 11:10:30.800672 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:30 crc kubenswrapper[4711]: I1202 11:10:30.812926 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sf4q"] Dec 02 11:10:30 crc kubenswrapper[4711]: I1202 11:10:30.932457 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqh2c\" (UniqueName: \"kubernetes.io/projected/169b86c8-1de7-47bb-823c-dbbba0f59ee3-kube-api-access-dqh2c\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:30 crc kubenswrapper[4711]: I1202 11:10:30.932686 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-utilities\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:30 crc kubenswrapper[4711]: I1202 11:10:30.932731 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-catalog-content\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.034410 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-catalog-content\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.034476 4711 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dqh2c\" (UniqueName: \"kubernetes.io/projected/169b86c8-1de7-47bb-823c-dbbba0f59ee3-kube-api-access-dqh2c\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.034603 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-utilities\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.035040 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-catalog-content\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.035076 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-utilities\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.058180 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqh2c\" (UniqueName: \"kubernetes.io/projected/169b86c8-1de7-47bb-823c-dbbba0f59ee3-kube-api-access-dqh2c\") pod \"certified-operators-4sf4q\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.126360 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:31 crc kubenswrapper[4711]: W1202 11:10:31.707814 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod169b86c8_1de7_47bb_823c_dbbba0f59ee3.slice/crio-58f15cc8836c292ba527305f2d6e617bbc869111327135b4372194005c5b1415 WatchSource:0}: Error finding container 58f15cc8836c292ba527305f2d6e617bbc869111327135b4372194005c5b1415: Status 404 returned error can't find the container with id 58f15cc8836c292ba527305f2d6e617bbc869111327135b4372194005c5b1415 Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.709167 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sf4q"] Dec 02 11:10:31 crc kubenswrapper[4711]: I1202 11:10:31.868318 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sf4q" event={"ID":"169b86c8-1de7-47bb-823c-dbbba0f59ee3","Type":"ContainerStarted","Data":"58f15cc8836c292ba527305f2d6e617bbc869111327135b4372194005c5b1415"} Dec 02 11:10:32 crc kubenswrapper[4711]: I1202 11:10:32.881451 4711 generic.go:334] "Generic (PLEG): container finished" podID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerID="e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346" exitCode=0 Dec 02 11:10:32 crc kubenswrapper[4711]: I1202 11:10:32.881550 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sf4q" event={"ID":"169b86c8-1de7-47bb-823c-dbbba0f59ee3","Type":"ContainerDied","Data":"e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346"} Dec 02 11:10:34 crc kubenswrapper[4711]: I1202 11:10:34.907976 4711 generic.go:334] "Generic (PLEG): container finished" podID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerID="c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b" exitCode=0 Dec 02 11:10:34 crc kubenswrapper[4711]: I1202 
11:10:34.908287 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sf4q" event={"ID":"169b86c8-1de7-47bb-823c-dbbba0f59ee3","Type":"ContainerDied","Data":"c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b"} Dec 02 11:10:35 crc kubenswrapper[4711]: I1202 11:10:35.920723 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sf4q" event={"ID":"169b86c8-1de7-47bb-823c-dbbba0f59ee3","Type":"ContainerStarted","Data":"e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95"} Dec 02 11:10:35 crc kubenswrapper[4711]: I1202 11:10:35.940216 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4sf4q" podStartSLOduration=3.3267882970000002 podStartE2EDuration="5.94019165s" podCreationTimestamp="2025-12-02 11:10:30 +0000 UTC" firstStartedPulling="2025-12-02 11:10:32.885360792 +0000 UTC m=+3422.594727249" lastFinishedPulling="2025-12-02 11:10:35.498764155 +0000 UTC m=+3425.208130602" observedRunningTime="2025-12-02 11:10:35.9375517 +0000 UTC m=+3425.646918147" watchObservedRunningTime="2025-12-02 11:10:35.94019165 +0000 UTC m=+3425.649558117" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.344454 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j2zv5/must-gather-pfnmv"] Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.346716 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/must-gather-pfnmv" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.349322 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-j2zv5"/"default-dockercfg-vljmr" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.349553 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-j2zv5"/"kube-root-ca.crt" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.349758 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-j2zv5"/"openshift-service-ca.crt" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.365009 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-j2zv5/must-gather-pfnmv"] Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.388074 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-must-gather-output\") pod \"must-gather-pfnmv\" (UID: \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\") " pod="openshift-must-gather-j2zv5/must-gather-pfnmv" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.388209 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fbw7\" (UniqueName: \"kubernetes.io/projected/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-kube-api-access-2fbw7\") pod \"must-gather-pfnmv\" (UID: \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\") " pod="openshift-must-gather-j2zv5/must-gather-pfnmv" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.489533 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-must-gather-output\") pod \"must-gather-pfnmv\" (UID: \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\") " 
pod="openshift-must-gather-j2zv5/must-gather-pfnmv" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.489972 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fbw7\" (UniqueName: \"kubernetes.io/projected/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-kube-api-access-2fbw7\") pod \"must-gather-pfnmv\" (UID: \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\") " pod="openshift-must-gather-j2zv5/must-gather-pfnmv" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.490299 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-must-gather-output\") pod \"must-gather-pfnmv\" (UID: \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\") " pod="openshift-must-gather-j2zv5/must-gather-pfnmv" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.508122 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fbw7\" (UniqueName: \"kubernetes.io/projected/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-kube-api-access-2fbw7\") pod \"must-gather-pfnmv\" (UID: \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\") " pod="openshift-must-gather-j2zv5/must-gather-pfnmv" Dec 02 11:10:37 crc kubenswrapper[4711]: I1202 11:10:37.676015 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/must-gather-pfnmv" Dec 02 11:10:38 crc kubenswrapper[4711]: I1202 11:10:38.133484 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-j2zv5/must-gather-pfnmv"] Dec 02 11:10:38 crc kubenswrapper[4711]: W1202 11:10:38.141293 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb0b11b_c436_4c78_bb82_ecac75bb40ab.slice/crio-bfcd256c1003580608e636ee9b7be77aabe90ed7c5912bfcb67b7250321f389c WatchSource:0}: Error finding container bfcd256c1003580608e636ee9b7be77aabe90ed7c5912bfcb67b7250321f389c: Status 404 returned error can't find the container with id bfcd256c1003580608e636ee9b7be77aabe90ed7c5912bfcb67b7250321f389c Dec 02 11:10:38 crc kubenswrapper[4711]: I1202 11:10:38.953530 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/must-gather-pfnmv" event={"ID":"bbb0b11b-c436-4c78-bb82-ecac75bb40ab","Type":"ContainerStarted","Data":"bfcd256c1003580608e636ee9b7be77aabe90ed7c5912bfcb67b7250321f389c"} Dec 02 11:10:41 crc kubenswrapper[4711]: I1202 11:10:41.126633 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:41 crc kubenswrapper[4711]: I1202 11:10:41.127021 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:41 crc kubenswrapper[4711]: I1202 11:10:41.198215 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:42 crc kubenswrapper[4711]: I1202 11:10:42.108534 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:42 crc kubenswrapper[4711]: I1202 11:10:42.160576 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-4sf4q"] Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.023002 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4sf4q" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerName="registry-server" containerID="cri-o://e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95" gracePeriod=2 Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.459263 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.543063 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqh2c\" (UniqueName: \"kubernetes.io/projected/169b86c8-1de7-47bb-823c-dbbba0f59ee3-kube-api-access-dqh2c\") pod \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.543108 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-utilities\") pod \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.543146 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-catalog-content\") pod \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\" (UID: \"169b86c8-1de7-47bb-823c-dbbba0f59ee3\") " Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.543932 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-utilities" (OuterVolumeSpecName: "utilities") pod "169b86c8-1de7-47bb-823c-dbbba0f59ee3" (UID: 
"169b86c8-1de7-47bb-823c-dbbba0f59ee3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.549427 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169b86c8-1de7-47bb-823c-dbbba0f59ee3-kube-api-access-dqh2c" (OuterVolumeSpecName: "kube-api-access-dqh2c") pod "169b86c8-1de7-47bb-823c-dbbba0f59ee3" (UID: "169b86c8-1de7-47bb-823c-dbbba0f59ee3"). InnerVolumeSpecName "kube-api-access-dqh2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.587881 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "169b86c8-1de7-47bb-823c-dbbba0f59ee3" (UID: "169b86c8-1de7-47bb-823c-dbbba0f59ee3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.645330 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqh2c\" (UniqueName: \"kubernetes.io/projected/169b86c8-1de7-47bb-823c-dbbba0f59ee3-kube-api-access-dqh2c\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.645364 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4711]: I1202 11:10:44.645376 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b86c8-1de7-47bb-823c-dbbba0f59ee3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.035650 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-j2zv5/must-gather-pfnmv" event={"ID":"bbb0b11b-c436-4c78-bb82-ecac75bb40ab","Type":"ContainerStarted","Data":"6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def"} Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.035715 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/must-gather-pfnmv" event={"ID":"bbb0b11b-c436-4c78-bb82-ecac75bb40ab","Type":"ContainerStarted","Data":"c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d"} Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.038837 4711 generic.go:334] "Generic (PLEG): container finished" podID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerID="e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95" exitCode=0 Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.038913 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sf4q" event={"ID":"169b86c8-1de7-47bb-823c-dbbba0f59ee3","Type":"ContainerDied","Data":"e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95"} Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.038946 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4sf4q" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.039017 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sf4q" event={"ID":"169b86c8-1de7-47bb-823c-dbbba0f59ee3","Type":"ContainerDied","Data":"58f15cc8836c292ba527305f2d6e617bbc869111327135b4372194005c5b1415"} Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.039054 4711 scope.go:117] "RemoveContainer" containerID="e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.065500 4711 scope.go:117] "RemoveContainer" containerID="c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.074523 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j2zv5/must-gather-pfnmv" podStartSLOduration=2.309406043 podStartE2EDuration="8.074507396s" podCreationTimestamp="2025-12-02 11:10:37 +0000 UTC" firstStartedPulling="2025-12-02 11:10:38.145873001 +0000 UTC m=+3427.855239458" lastFinishedPulling="2025-12-02 11:10:43.910974354 +0000 UTC m=+3433.620340811" observedRunningTime="2025-12-02 11:10:45.068717501 +0000 UTC m=+3434.778083948" watchObservedRunningTime="2025-12-02 11:10:45.074507396 +0000 UTC m=+3434.783873843" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.114686 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sf4q"] Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.130941 4711 scope.go:117] "RemoveContainer" containerID="e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.145886 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4sf4q"] Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.151183 4711 scope.go:117] 
"RemoveContainer" containerID="e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95" Dec 02 11:10:45 crc kubenswrapper[4711]: E1202 11:10:45.152065 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95\": container with ID starting with e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95 not found: ID does not exist" containerID="e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.152118 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95"} err="failed to get container status \"e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95\": rpc error: code = NotFound desc = could not find container \"e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95\": container with ID starting with e2984aa40f0eaaabd6db5a2aa5d94bea929967d8122150a2bd1af26f395f3d95 not found: ID does not exist" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.152152 4711 scope.go:117] "RemoveContainer" containerID="c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b" Dec 02 11:10:45 crc kubenswrapper[4711]: E1202 11:10:45.152747 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b\": container with ID starting with c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b not found: ID does not exist" containerID="c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.152801 4711 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b"} err="failed to get container status \"c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b\": rpc error: code = NotFound desc = could not find container \"c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b\": container with ID starting with c65bf01f27984b7fc0e63207c5af55c57119ecaa39a6124cef7dd2fa47f5485b not found: ID does not exist" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.152903 4711 scope.go:117] "RemoveContainer" containerID="e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346" Dec 02 11:10:45 crc kubenswrapper[4711]: E1202 11:10:45.153746 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346\": container with ID starting with e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346 not found: ID does not exist" containerID="e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346" Dec 02 11:10:45 crc kubenswrapper[4711]: I1202 11:10:45.153783 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346"} err="failed to get container status \"e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346\": rpc error: code = NotFound desc = could not find container \"e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346\": container with ID starting with e8b69e48d85eb16e0ce65e60b6de9256f9ac6e4be20ea1a0fc2e24aac656a346 not found: ID does not exist" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.089902 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" path="/var/lib/kubelet/pods/169b86c8-1de7-47bb-823c-dbbba0f59ee3/volumes" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 
11:10:47.681495 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-2hm7t"] Dec 02 11:10:47 crc kubenswrapper[4711]: E1202 11:10:47.681808 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerName="registry-server" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.681819 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerName="registry-server" Dec 02 11:10:47 crc kubenswrapper[4711]: E1202 11:10:47.681833 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerName="extract-utilities" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.681840 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerName="extract-utilities" Dec 02 11:10:47 crc kubenswrapper[4711]: E1202 11:10:47.681850 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerName="extract-content" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.681856 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerName="extract-content" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.691636 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="169b86c8-1de7-47bb-823c-dbbba0f59ee3" containerName="registry-server" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.692246 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.698870 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbqql\" (UniqueName: \"kubernetes.io/projected/ab2c05a1-353a-41fa-83e8-b8ea401b2767-kube-api-access-dbqql\") pod \"crc-debug-2hm7t\" (UID: \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\") " pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.699104 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2c05a1-353a-41fa-83e8-b8ea401b2767-host\") pod \"crc-debug-2hm7t\" (UID: \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\") " pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.803108 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbqql\" (UniqueName: \"kubernetes.io/projected/ab2c05a1-353a-41fa-83e8-b8ea401b2767-kube-api-access-dbqql\") pod \"crc-debug-2hm7t\" (UID: \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\") " pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.803845 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2c05a1-353a-41fa-83e8-b8ea401b2767-host\") pod \"crc-debug-2hm7t\" (UID: \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\") " pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:10:47 crc kubenswrapper[4711]: I1202 11:10:47.803909 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2c05a1-353a-41fa-83e8-b8ea401b2767-host\") pod \"crc-debug-2hm7t\" (UID: \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\") " pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:10:47 crc 
kubenswrapper[4711]: I1202 11:10:47.827076 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbqql\" (UniqueName: \"kubernetes.io/projected/ab2c05a1-353a-41fa-83e8-b8ea401b2767-kube-api-access-dbqql\") pod \"crc-debug-2hm7t\" (UID: \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\") " pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:10:48 crc kubenswrapper[4711]: I1202 11:10:48.012918 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:10:48 crc kubenswrapper[4711]: W1202 11:10:48.053398 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2c05a1_353a_41fa_83e8_b8ea401b2767.slice/crio-a84cd3d902c699bb836460c2b14f4da5ca9a992092aa8cd19b9e5c42ee456824 WatchSource:0}: Error finding container a84cd3d902c699bb836460c2b14f4da5ca9a992092aa8cd19b9e5c42ee456824: Status 404 returned error can't find the container with id a84cd3d902c699bb836460c2b14f4da5ca9a992092aa8cd19b9e5c42ee456824 Dec 02 11:10:48 crc kubenswrapper[4711]: I1202 11:10:48.085036 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" event={"ID":"ab2c05a1-353a-41fa-83e8-b8ea401b2767","Type":"ContainerStarted","Data":"a84cd3d902c699bb836460c2b14f4da5ca9a992092aa8cd19b9e5c42ee456824"} Dec 02 11:10:52 crc kubenswrapper[4711]: I1202 11:10:52.586101 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:10:52 crc kubenswrapper[4711]: I1202 11:10:52.586983 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" 
podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:10:52 crc kubenswrapper[4711]: I1202 11:10:52.587062 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 11:10:52 crc kubenswrapper[4711]: I1202 11:10:52.589032 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcb5ee66f74bc7fa910b7fd68c8b56646b704858f1b9b5bdf111d82410fbe2fc"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:10:52 crc kubenswrapper[4711]: I1202 11:10:52.589129 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://bcb5ee66f74bc7fa910b7fd68c8b56646b704858f1b9b5bdf111d82410fbe2fc" gracePeriod=600 Dec 02 11:10:53 crc kubenswrapper[4711]: I1202 11:10:53.129184 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="bcb5ee66f74bc7fa910b7fd68c8b56646b704858f1b9b5bdf111d82410fbe2fc" exitCode=0 Dec 02 11:10:53 crc kubenswrapper[4711]: I1202 11:10:53.129233 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"bcb5ee66f74bc7fa910b7fd68c8b56646b704858f1b9b5bdf111d82410fbe2fc"} Dec 02 11:10:53 crc kubenswrapper[4711]: I1202 11:10:53.129271 4711 scope.go:117] "RemoveContainer" 
containerID="b5ccfc95c7893584422e4b00784ea40be4c8ebb3ee763ac71e8b2de294a8282d" Dec 02 11:10:59 crc kubenswrapper[4711]: I1202 11:10:59.200114 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"} Dec 02 11:10:59 crc kubenswrapper[4711]: I1202 11:10:59.202576 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" event={"ID":"ab2c05a1-353a-41fa-83e8-b8ea401b2767","Type":"ContainerStarted","Data":"c70b9a11f3e02b0be693261ae5a8f4451927ba3b31afe0f058c274f821f9007b"} Dec 02 11:10:59 crc kubenswrapper[4711]: I1202 11:10:59.243390 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" podStartSLOduration=1.6218982309999999 podStartE2EDuration="12.243369785s" podCreationTimestamp="2025-12-02 11:10:47 +0000 UTC" firstStartedPulling="2025-12-02 11:10:48.070519363 +0000 UTC m=+3437.779885810" lastFinishedPulling="2025-12-02 11:10:58.691990917 +0000 UTC m=+3448.401357364" observedRunningTime="2025-12-02 11:10:59.236808179 +0000 UTC m=+3448.946174626" watchObservedRunningTime="2025-12-02 11:10:59.243369785 +0000 UTC m=+3448.952736232" Dec 02 11:11:36 crc kubenswrapper[4711]: I1202 11:11:36.545673 4711 generic.go:334] "Generic (PLEG): container finished" podID="ab2c05a1-353a-41fa-83e8-b8ea401b2767" containerID="c70b9a11f3e02b0be693261ae5a8f4451927ba3b31afe0f058c274f821f9007b" exitCode=0 Dec 02 11:11:36 crc kubenswrapper[4711]: I1202 11:11:36.546241 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" event={"ID":"ab2c05a1-353a-41fa-83e8-b8ea401b2767","Type":"ContainerDied","Data":"c70b9a11f3e02b0be693261ae5a8f4451927ba3b31afe0f058c274f821f9007b"} Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 
11:11:37.682501 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 11:11:37.716817 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-2hm7t"] Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 11:11:37.726504 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-2hm7t"] Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 11:11:37.753775 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2c05a1-353a-41fa-83e8-b8ea401b2767-host\") pod \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\" (UID: \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\") " Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 11:11:37.753984 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbqql\" (UniqueName: \"kubernetes.io/projected/ab2c05a1-353a-41fa-83e8-b8ea401b2767-kube-api-access-dbqql\") pod \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\" (UID: \"ab2c05a1-353a-41fa-83e8-b8ea401b2767\") " Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 11:11:37.754111 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2c05a1-353a-41fa-83e8-b8ea401b2767-host" (OuterVolumeSpecName: "host") pod "ab2c05a1-353a-41fa-83e8-b8ea401b2767" (UID: "ab2c05a1-353a-41fa-83e8-b8ea401b2767"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 11:11:37.754855 4711 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab2c05a1-353a-41fa-83e8-b8ea401b2767-host\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 11:11:37.759603 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2c05a1-353a-41fa-83e8-b8ea401b2767-kube-api-access-dbqql" (OuterVolumeSpecName: "kube-api-access-dbqql") pod "ab2c05a1-353a-41fa-83e8-b8ea401b2767" (UID: "ab2c05a1-353a-41fa-83e8-b8ea401b2767"). InnerVolumeSpecName "kube-api-access-dbqql". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:11:37 crc kubenswrapper[4711]: I1202 11:11:37.856264 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbqql\" (UniqueName: \"kubernetes.io/projected/ab2c05a1-353a-41fa-83e8-b8ea401b2767-kube-api-access-dbqql\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:38 crc kubenswrapper[4711]: I1202 11:11:38.569351 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84cd3d902c699bb836460c2b14f4da5ca9a992092aa8cd19b9e5c42ee456824" Dec 02 11:11:38 crc kubenswrapper[4711]: I1202 11:11:38.569763 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-2hm7t" Dec 02 11:11:38 crc kubenswrapper[4711]: I1202 11:11:38.975671 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-q9tj9"] Dec 02 11:11:38 crc kubenswrapper[4711]: E1202 11:11:38.976687 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2c05a1-353a-41fa-83e8-b8ea401b2767" containerName="container-00" Dec 02 11:11:38 crc kubenswrapper[4711]: I1202 11:11:38.976720 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2c05a1-353a-41fa-83e8-b8ea401b2767" containerName="container-00" Dec 02 11:11:38 crc kubenswrapper[4711]: I1202 11:11:38.977051 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2c05a1-353a-41fa-83e8-b8ea401b2767" containerName="container-00" Dec 02 11:11:38 crc kubenswrapper[4711]: I1202 11:11:38.978077 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.080124 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j8tx\" (UniqueName: \"kubernetes.io/projected/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-kube-api-access-4j8tx\") pod \"crc-debug-q9tj9\" (UID: \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\") " pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.080330 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-host\") pod \"crc-debug-q9tj9\" (UID: \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\") " pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.103942 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2c05a1-353a-41fa-83e8-b8ea401b2767" 
path="/var/lib/kubelet/pods/ab2c05a1-353a-41fa-83e8-b8ea401b2767/volumes" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.181689 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-host\") pod \"crc-debug-q9tj9\" (UID: \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\") " pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.181866 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-host\") pod \"crc-debug-q9tj9\" (UID: \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\") " pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.182322 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j8tx\" (UniqueName: \"kubernetes.io/projected/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-kube-api-access-4j8tx\") pod \"crc-debug-q9tj9\" (UID: \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\") " pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.203033 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j8tx\" (UniqueName: \"kubernetes.io/projected/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-kube-api-access-4j8tx\") pod \"crc-debug-q9tj9\" (UID: \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\") " pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.308831 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:39 crc kubenswrapper[4711]: I1202 11:11:39.585521 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" event={"ID":"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4","Type":"ContainerStarted","Data":"ca0c5bf8a19f70aaafb64f5b9f789c307fc647c358b25e580dabfb709fe8c910"} Dec 02 11:11:40 crc kubenswrapper[4711]: I1202 11:11:40.602352 4711 generic.go:334] "Generic (PLEG): container finished" podID="95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4" containerID="388cabcfe5dfb13e2115fea592c15cc85e9a24b92ae89d996ed73aff942d1de1" exitCode=0 Dec 02 11:11:40 crc kubenswrapper[4711]: I1202 11:11:40.602473 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" event={"ID":"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4","Type":"ContainerDied","Data":"388cabcfe5dfb13e2115fea592c15cc85e9a24b92ae89d996ed73aff942d1de1"} Dec 02 11:11:41 crc kubenswrapper[4711]: I1202 11:11:41.170389 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-q9tj9"] Dec 02 11:11:41 crc kubenswrapper[4711]: I1202 11:11:41.177748 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-q9tj9"] Dec 02 11:11:41 crc kubenswrapper[4711]: I1202 11:11:41.745767 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:41 crc kubenswrapper[4711]: I1202 11:11:41.939505 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j8tx\" (UniqueName: \"kubernetes.io/projected/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-kube-api-access-4j8tx\") pod \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\" (UID: \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\") " Dec 02 11:11:41 crc kubenswrapper[4711]: I1202 11:11:41.939552 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-host\") pod \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\" (UID: \"95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4\") " Dec 02 11:11:41 crc kubenswrapper[4711]: I1202 11:11:41.940091 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-host" (OuterVolumeSpecName: "host") pod "95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4" (UID: "95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:11:41 crc kubenswrapper[4711]: I1202 11:11:41.949525 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-kube-api-access-4j8tx" (OuterVolumeSpecName: "kube-api-access-4j8tx") pod "95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4" (UID: "95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4"). InnerVolumeSpecName "kube-api-access-4j8tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.042269 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j8tx\" (UniqueName: \"kubernetes.io/projected/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-kube-api-access-4j8tx\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.042666 4711 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4-host\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.346464 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-gwk66"] Dec 02 11:11:42 crc kubenswrapper[4711]: E1202 11:11:42.346913 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4" containerName="container-00" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.346928 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4" containerName="container-00" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.347243 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4" containerName="container-00" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.348166 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.449764 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-host\") pod \"crc-debug-gwk66\" (UID: \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\") " pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.450199 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljkkl\" (UniqueName: \"kubernetes.io/projected/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-kube-api-access-ljkkl\") pod \"crc-debug-gwk66\" (UID: \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\") " pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.552436 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-host\") pod \"crc-debug-gwk66\" (UID: \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\") " pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.552613 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-host\") pod \"crc-debug-gwk66\" (UID: \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\") " pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.552827 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljkkl\" (UniqueName: \"kubernetes.io/projected/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-kube-api-access-ljkkl\") pod \"crc-debug-gwk66\" (UID: \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\") " pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:42 crc 
kubenswrapper[4711]: I1202 11:11:42.572140 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljkkl\" (UniqueName: \"kubernetes.io/projected/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-kube-api-access-ljkkl\") pod \"crc-debug-gwk66\" (UID: \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\") " pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.621814 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca0c5bf8a19f70aaafb64f5b9f789c307fc647c358b25e580dabfb709fe8c910" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.621921 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-q9tj9" Dec 02 11:11:42 crc kubenswrapper[4711]: I1202 11:11:42.668234 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:42 crc kubenswrapper[4711]: W1202 11:11:42.702982 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod853d1a8c_e4ad_4ecd_8d1c_548cc9b008d0.slice/crio-41b2aa7e93a4a7f1645663d88e8fcef5c79c840c7d08b8ca46ed5592c63d4c83 WatchSource:0}: Error finding container 41b2aa7e93a4a7f1645663d88e8fcef5c79c840c7d08b8ca46ed5592c63d4c83: Status 404 returned error can't find the container with id 41b2aa7e93a4a7f1645663d88e8fcef5c79c840c7d08b8ca46ed5592c63d4c83 Dec 02 11:11:43 crc kubenswrapper[4711]: I1202 11:11:43.098384 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4" path="/var/lib/kubelet/pods/95d9d868-de2e-4d1e-a901-f6a2fa4dc4a4/volumes" Dec 02 11:11:43 crc kubenswrapper[4711]: I1202 11:11:43.631297 4711 generic.go:334] "Generic (PLEG): container finished" podID="853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0" 
containerID="2fc04d08631701fd0921c93522a05b530704b3dfdf2a28983027c8d303ad5454" exitCode=0 Dec 02 11:11:43 crc kubenswrapper[4711]: I1202 11:11:43.631347 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/crc-debug-gwk66" event={"ID":"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0","Type":"ContainerDied","Data":"2fc04d08631701fd0921c93522a05b530704b3dfdf2a28983027c8d303ad5454"} Dec 02 11:11:43 crc kubenswrapper[4711]: I1202 11:11:43.631378 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/crc-debug-gwk66" event={"ID":"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0","Type":"ContainerStarted","Data":"41b2aa7e93a4a7f1645663d88e8fcef5c79c840c7d08b8ca46ed5592c63d4c83"} Dec 02 11:11:43 crc kubenswrapper[4711]: I1202 11:11:43.665443 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-gwk66"] Dec 02 11:11:43 crc kubenswrapper[4711]: I1202 11:11:43.672332 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j2zv5/crc-debug-gwk66"] Dec 02 11:11:44 crc kubenswrapper[4711]: I1202 11:11:44.737887 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:44 crc kubenswrapper[4711]: I1202 11:11:44.899699 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-host\") pod \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\" (UID: \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\") " Dec 02 11:11:44 crc kubenswrapper[4711]: I1202 11:11:44.899830 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-host" (OuterVolumeSpecName: "host") pod "853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0" (UID: "853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:11:44 crc kubenswrapper[4711]: I1202 11:11:44.899864 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljkkl\" (UniqueName: \"kubernetes.io/projected/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-kube-api-access-ljkkl\") pod \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\" (UID: \"853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0\") " Dec 02 11:11:44 crc kubenswrapper[4711]: I1202 11:11:44.900354 4711 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-host\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:44 crc kubenswrapper[4711]: I1202 11:11:44.907848 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-kube-api-access-ljkkl" (OuterVolumeSpecName: "kube-api-access-ljkkl") pod "853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0" (UID: "853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0"). InnerVolumeSpecName "kube-api-access-ljkkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:11:45 crc kubenswrapper[4711]: I1202 11:11:45.002291 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljkkl\" (UniqueName: \"kubernetes.io/projected/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0-kube-api-access-ljkkl\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:45 crc kubenswrapper[4711]: I1202 11:11:45.094606 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0" path="/var/lib/kubelet/pods/853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0/volumes" Dec 02 11:11:45 crc kubenswrapper[4711]: I1202 11:11:45.655168 4711 scope.go:117] "RemoveContainer" containerID="2fc04d08631701fd0921c93522a05b530704b3dfdf2a28983027c8d303ad5454" Dec 02 11:11:45 crc kubenswrapper[4711]: I1202 11:11:45.655196 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/crc-debug-gwk66" Dec 02 11:11:58 crc kubenswrapper[4711]: I1202 11:11:58.532692 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fddf747c8-8wktl_429cc017-c93c-4d8a-b5eb-819eb6fde287/barbican-api/0.log" Dec 02 11:11:58 crc kubenswrapper[4711]: I1202 11:11:58.667056 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fddf747c8-8wktl_429cc017-c93c-4d8a-b5eb-819eb6fde287/barbican-api-log/0.log" Dec 02 11:11:58 crc kubenswrapper[4711]: I1202 11:11:58.781502 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c5c78fc8b-7bz2t_2cacc030-0a08-4dab-96e4-a024aa16faa6/barbican-keystone-listener/0.log" Dec 02 11:11:58 crc kubenswrapper[4711]: I1202 11:11:58.874589 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c5c78fc8b-7bz2t_2cacc030-0a08-4dab-96e4-a024aa16faa6/barbican-keystone-listener-log/0.log" Dec 02 11:11:58 crc kubenswrapper[4711]: I1202 11:11:58.976185 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d4b47d497-gjzqt_c55fa7c4-9945-4651-bf4b-9ad1b94e6047/barbican-worker-log/0.log" Dec 02 11:11:58 crc kubenswrapper[4711]: I1202 11:11:58.984625 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d4b47d497-gjzqt_c55fa7c4-9945-4651-bf4b-9ad1b94e6047/barbican-worker/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.141975 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh_4a833e50-6d25-4593-b413-ceb01d516010/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.218076 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe8c836c-181d-4c74-8cfc-7e66357bed76/ceilometer-central-agent/0.log" Dec 
02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.318808 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe8c836c-181d-4c74-8cfc-7e66357bed76/ceilometer-notification-agent/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.345889 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe8c836c-181d-4c74-8cfc-7e66357bed76/proxy-httpd/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.408677 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe8c836c-181d-4c74-8cfc-7e66357bed76/sg-core/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.550942 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d1652776-ac6b-4033-a6b0-e0272ce72b34/cinder-api-log/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.552059 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d1652776-ac6b-4033-a6b0-e0272ce72b34/cinder-api/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.698171 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5a4b1357-91ed-4ef7-85f5-9b52085ce952/cinder-scheduler/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.797519 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5a4b1357-91ed-4ef7-85f5-9b52085ce952/probe/0.log" Dec 02 11:11:59 crc kubenswrapper[4711]: I1202 11:11:59.841274 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq_c9425d80-55ad-4f08-acd8-4389676e9b71/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.043920 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5_6cc0043c-689a-4c2f-b70f-a4a3c5344385/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.070761 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-59gqw_e33eabd6-6a5d-4d49-b0db-3d31fcb6f171/init/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.235679 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-59gqw_e33eabd6-6a5d-4d49-b0db-3d31fcb6f171/init/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.302170 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-59gqw_e33eabd6-6a5d-4d49-b0db-3d31fcb6f171/dnsmasq-dns/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.316915 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt_1a61e5f0-3651-4c39-aec6-5c6ae688a94c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.496151 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9e688916-64de-415a-86d9-b54a42d3174d/glance-log/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.505919 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9e688916-64de-415a-86d9-b54a42d3174d/glance-httpd/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.665033 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c9a0c110-a808-440d-ad76-4c1b193f3543/glance-httpd/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.721357 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_c9a0c110-a808-440d-ad76-4c1b193f3543/glance-log/0.log" Dec 02 11:12:00 crc kubenswrapper[4711]: I1202 11:12:00.918369 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6b4d9565bd-5nwjn_a5e4731d-0cea-4530-aba2-86777a8db6cb/horizon/0.log" Dec 02 11:12:01 crc kubenswrapper[4711]: I1202 11:12:01.046669 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb_d8270f0b-6b4c-4682-bf69-09147b922785/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:01 crc kubenswrapper[4711]: I1202 11:12:01.214256 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-w4jpz_17177c8c-c071-4484-b8e6-2b3c49e8a3e4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:01 crc kubenswrapper[4711]: I1202 11:12:01.215484 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6b4d9565bd-5nwjn_a5e4731d-0cea-4530-aba2-86777a8db6cb/horizon-log/0.log" Dec 02 11:12:01 crc kubenswrapper[4711]: I1202 11:12:01.471675 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29411221-c27ql_37b4f06f-7175-4bee-85ee-970775ae49a8/keystone-cron/0.log" Dec 02 11:12:01 crc kubenswrapper[4711]: I1202 11:12:01.509870 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6986b467dd-l4plx_805add98-0168-44c8-a35c-dfdd1709a8ae/keystone-api/0.log" Dec 02 11:12:01 crc kubenswrapper[4711]: I1202 11:12:01.640201 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6abaf105-a517-42d9-86c4-5e6cd5527b94/kube-state-metrics/0.log" Dec 02 11:12:01 crc kubenswrapper[4711]: I1202 11:12:01.735783 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-czk8n_48489c70-bfb2-4dbf-b002-1dcdb3da737f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:02 crc kubenswrapper[4711]: I1202 11:12:02.252542 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b445d9db9-64xt2_886c7d1f-5204-436e-a656-68b1ac98b586/neutron-api/0.log" Dec 02 11:12:02 crc kubenswrapper[4711]: I1202 11:12:02.287108 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b445d9db9-64xt2_886c7d1f-5204-436e-a656-68b1ac98b586/neutron-httpd/0.log" Dec 02 11:12:02 crc kubenswrapper[4711]: I1202 11:12:02.447310 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc_18808e54-ca3d-47a8-ae93-d05737319878/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:02 crc kubenswrapper[4711]: I1202 11:12:02.927915 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d878b444-84f7-4c21-b377-91c45878b703/nova-cell0-conductor-conductor/0.log" Dec 02 11:12:02 crc kubenswrapper[4711]: I1202 11:12:02.952156 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d6a280ba-4feb-4ffd-8452-a4e7d2c6512b/nova-api-log/0.log" Dec 02 11:12:03 crc kubenswrapper[4711]: I1202 11:12:03.096455 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d6a280ba-4feb-4ffd-8452-a4e7d2c6512b/nova-api-api/0.log" Dec 02 11:12:03 crc kubenswrapper[4711]: I1202 11:12:03.235103 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2/nova-cell1-conductor-conductor/0.log" Dec 02 11:12:03 crc kubenswrapper[4711]: I1202 11:12:03.257774 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8336264b-6d1c-4a37-b329-743ef0e63e48/nova-cell1-novncproxy-novncproxy/0.log" Dec 02 11:12:03 crc kubenswrapper[4711]: I1202 11:12:03.407073 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-96fdh_45d45e5b-27e6-42bf-863d-e04caf847040/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:03 crc kubenswrapper[4711]: I1202 11:12:03.640761 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c513475d-590a-4821-9ee5-894e9faaef88/nova-metadata-log/0.log" Dec 02 11:12:03 crc kubenswrapper[4711]: I1202 11:12:03.841461 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1de720bf-9fe1-40cb-888c-1868fbc89f63/mysql-bootstrap/0.log" Dec 02 11:12:03 crc kubenswrapper[4711]: I1202 11:12:03.859019 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_426aabb1-9d66-4797-8fd2-3ecf4074192e/nova-scheduler-scheduler/0.log" Dec 02 11:12:04 crc kubenswrapper[4711]: I1202 11:12:04.095848 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1de720bf-9fe1-40cb-888c-1868fbc89f63/galera/0.log" Dec 02 11:12:04 crc kubenswrapper[4711]: I1202 11:12:04.109200 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1de720bf-9fe1-40cb-888c-1868fbc89f63/mysql-bootstrap/0.log" Dec 02 11:12:04 crc kubenswrapper[4711]: I1202 11:12:04.311465 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_12dcc0fa-368d-4a71-99ee-fe27e2cd410a/mysql-bootstrap/0.log" Dec 02 11:12:04 crc kubenswrapper[4711]: I1202 11:12:04.501040 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_12dcc0fa-368d-4a71-99ee-fe27e2cd410a/mysql-bootstrap/0.log" Dec 02 11:12:04 crc kubenswrapper[4711]: I1202 11:12:04.514464 4711 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_12dcc0fa-368d-4a71-99ee-fe27e2cd410a/galera/0.log" Dec 02 11:12:04 crc kubenswrapper[4711]: I1202 11:12:04.705468 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6da5f746-13e6-4933-8b49-ad17165cfcf0/openstackclient/0.log" Dec 02 11:12:04 crc kubenswrapper[4711]: I1202 11:12:04.720657 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c513475d-590a-4821-9ee5-894e9faaef88/nova-metadata-metadata/0.log" Dec 02 11:12:04 crc kubenswrapper[4711]: I1202 11:12:04.773971 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7vpwc_4a2d3ff8-c766-478e-9fae-105cd7432c09/openstack-network-exporter/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.021448 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lxtbd_82b00f57-beb4-43ad-a1c5-cc9790bb167e/ovsdb-server-init/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.207283 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lxtbd_82b00f57-beb4-43ad-a1c5-cc9790bb167e/ovs-vswitchd/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.215238 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lxtbd_82b00f57-beb4-43ad-a1c5-cc9790bb167e/ovsdb-server/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.258272 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lxtbd_82b00f57-beb4-43ad-a1c5-cc9790bb167e/ovsdb-server-init/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.422864 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q57lb_7ce53b33-b78a-446d-b345-c8d918209ddf/ovn-controller/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.473324 4711 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9hrn7_e572720a-5f65-485f-ad5b-76d5f7a782ac/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.605302 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_19183ef0-1a98-4d60-96c3-2b15fd8bd2e8/openstack-network-exporter/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.707863 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_19183ef0-1a98-4d60-96c3-2b15fd8bd2e8/ovn-northd/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.758698 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7fff7494-ee8a-4c45-87de-00444f64be54/openstack-network-exporter/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.816302 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7fff7494-ee8a-4c45-87de-00444f64be54/ovsdbserver-nb/0.log" Dec 02 11:12:05 crc kubenswrapper[4711]: I1202 11:12:05.979079 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_99032f62-533c-4fa2-887c-41a25a505906/openstack-network-exporter/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.039138 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_99032f62-533c-4fa2-887c-41a25a505906/ovsdbserver-sb/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.296735 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6545f6547b-92nrg_34890e20-861e-4023-8029-aff08285be51/placement-log/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.310056 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6545f6547b-92nrg_34890e20-861e-4023-8029-aff08285be51/placement-api/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.369377 
4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b642dce9-6793-46ab-9d8a-061c21e965ce/setup-container/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.509408 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b642dce9-6793-46ab-9d8a-061c21e965ce/setup-container/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.521389 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b642dce9-6793-46ab-9d8a-061c21e965ce/rabbitmq/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.607181 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9b7fae8d-6b42-4c76-b0a0-74004c2e5e47/setup-container/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.842203 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9b7fae8d-6b42-4c76-b0a0-74004c2e5e47/rabbitmq/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.862788 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9b7fae8d-6b42-4c76-b0a0-74004c2e5e47/setup-container/0.log" Dec 02 11:12:06 crc kubenswrapper[4711]: I1202 11:12:06.885497 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz_5ff31470-e780-4e6a-850a-6cada5050225/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.008114 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d6hwm_c9824b88-0553-466a-9c0d-07ab1949543a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.102166 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj_2712309c-6014-4332-86b8-d42b5021b6c0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.326670 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-frrkh_ec850345-39cb-45c3-881d-aa6f59cf2c7a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.347849 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-t4djr_4a03a125-6a0b-4e81-8df8-48e0085fa9a1/ssh-known-hosts-edpm-deployment/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.585332 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8455cffcc7-gvzs8_4bb0ebbe-23dd-4970-bc78-799616ef2e21/proxy-server/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.639479 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8455cffcc7-gvzs8_4bb0ebbe-23dd-4970-bc78-799616ef2e21/proxy-httpd/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.804468 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-58jtp_848cb525-39ab-47d7-99fc-9fbc249e740a/swift-ring-rebalance/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.837788 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/account-auditor/0.log" Dec 02 11:12:07 crc kubenswrapper[4711]: I1202 11:12:07.878254 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/account-reaper/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.030640 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/account-replicator/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.072355 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/account-server/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.078107 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/container-auditor/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.160908 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/container-replicator/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.228671 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/container-server/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.268725 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-auditor/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.355874 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/container-updater/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.374016 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-expirer/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.428728 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-replicator/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.499047 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-server/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.572855 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/rsync/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.576865 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-updater/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.632108 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/swift-recon-cron/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.798229 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5_5192ee19-472c-4f7c-b41d-4a11b518b045/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:08 crc kubenswrapper[4711]: I1202 11:12:08.858588 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_725581bd-6264-4ca6-b1fa-126c3c50800b/tempest-tests-tempest-tests-runner/0.log" Dec 02 11:12:09 crc kubenswrapper[4711]: I1202 11:12:09.025244 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_76f6fd71-c403-415b-9402-f3fcd9ab0fd4/test-operator-logs-container/0.log" Dec 02 11:12:09 crc kubenswrapper[4711]: I1202 11:12:09.068869 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-55wpk_c44d97e0-717c-4337-910f-68b93cc653a7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:12:16 crc kubenswrapper[4711]: I1202 11:12:16.717305 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_f2ad898b-6abc-49b9-8f12-5e2da28b6479/memcached/0.log" Dec 02 11:12:32 crc kubenswrapper[4711]: I1202 11:12:32.473824 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/util/0.log" Dec 02 11:12:32 crc kubenswrapper[4711]: I1202 11:12:32.689691 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/util/0.log" Dec 02 11:12:32 crc kubenswrapper[4711]: I1202 11:12:32.694503 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/pull/0.log" Dec 02 11:12:32 crc kubenswrapper[4711]: I1202 11:12:32.694764 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/pull/0.log" Dec 02 11:12:32 crc kubenswrapper[4711]: I1202 11:12:32.919734 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/util/0.log" Dec 02 11:12:32 crc kubenswrapper[4711]: I1202 11:12:32.951890 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/pull/0.log" Dec 02 11:12:32 crc kubenswrapper[4711]: I1202 11:12:32.970655 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/extract/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.082806 4711 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-zsv2n_10c23c28-0e51-465d-ba7c-1becd6a7b5ee/kube-rbac-proxy/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.162584 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-n9x57_d5039117-0162-4158-b6f7-a3dedff319fb/kube-rbac-proxy/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.193476 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-zsv2n_10c23c28-0e51-465d-ba7c-1becd6a7b5ee/manager/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.301585 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-n9x57_d5039117-0162-4158-b6f7-a3dedff319fb/manager/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.405114 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-ktj75_26eb7b16-7210-459f-baac-e740acdb363e/kube-rbac-proxy/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.417901 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-ktj75_26eb7b16-7210-459f-baac-e740acdb363e/manager/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.569199 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-pbvd2_a1984464-0dac-491f-a2f7-bc1f9214fef8/kube-rbac-proxy/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.771656 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v98q2_8284f010-fa2e-45fd-aa0f-46958a91102b/manager/0.log" Dec 02 11:12:33 crc 
kubenswrapper[4711]: I1202 11:12:33.878275 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v98q2_8284f010-fa2e-45fd-aa0f-46958a91102b/kube-rbac-proxy/0.log" Dec 02 11:12:33 crc kubenswrapper[4711]: I1202 11:12:33.945377 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-pbvd2_a1984464-0dac-491f-a2f7-bc1f9214fef8/manager/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.052038 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-bqhpj_59853ec3-31ef-402d-8f5f-c12528b688f0/kube-rbac-proxy/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.126821 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tnkm7_90b53574-c0b1-4bc6-ba22-238abb3c5b32/kube-rbac-proxy/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.137464 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-bqhpj_59853ec3-31ef-402d-8f5f-c12528b688f0/manager/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.361754 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-cwr6k_102348ad-5257-4114-acd6-e0e6c60a3c2b/manager/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.366768 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-cwr6k_102348ad-5257-4114-acd6-e0e6c60a3c2b/kube-rbac-proxy/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.387581 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tnkm7_90b53574-c0b1-4bc6-ba22-238abb3c5b32/manager/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.532779 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-xkq8d_0b12ad88-acba-4d9f-82ac-f59d3ca57ac8/kube-rbac-proxy/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.616730 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-xkq8d_0b12ad88-acba-4d9f-82ac-f59d3ca57ac8/manager/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.764476 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-5nkxr_9d8cab18-532c-45c8-ba21-6f3bee02c722/manager/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.776900 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-5nkxr_9d8cab18-532c-45c8-ba21-6f3bee02c722/kube-rbac-proxy/0.log" Dec 02 11:12:34 crc kubenswrapper[4711]: I1202 11:12:34.844211 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-lsgwm_7e3c4c79-5009-40f8-80f9-0d30bf57cc5a/kube-rbac-proxy/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 11:12:35.019337 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-lsgwm_7e3c4c79-5009-40f8-80f9-0d30bf57cc5a/manager/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 11:12:35.039350 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-99nd2_1f5bd4c4-1262-47a2-94fb-bce66ebe7929/kube-rbac-proxy/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 
11:12:35.076540 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-99nd2_1f5bd4c4-1262-47a2-94fb-bce66ebe7929/manager/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 11:12:35.332577 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wdt5k_68951454-b246-49ab-b604-a62c48e0b2ea/kube-rbac-proxy/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 11:12:35.358484 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wdt5k_68951454-b246-49ab-b604-a62c48e0b2ea/manager/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 11:12:35.448904 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-24pj6_f4109dad-388a-493d-b026-6cd10b9f76dd/kube-rbac-proxy/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 11:12:35.541598 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-24pj6_f4109dad-388a-493d-b026-6cd10b9f76dd/manager/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 11:12:35.592965 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs_6ef2b37f-78be-4a19-9d1b-b7d982032aab/kube-rbac-proxy/0.log" Dec 02 11:12:35 crc kubenswrapper[4711]: I1202 11:12:35.624889 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs_6ef2b37f-78be-4a19-9d1b-b7d982032aab/manager/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.052587 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7dwdr_1d175158-9d08-4b61-87b8-c9054e78d6aa/registry-server/0.log" Dec 02 
11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.060444 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-d77745b8c-lf4vw_f105bf88-39ba-4e14-8741-1c3a0d759f63/operator/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.308473 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-l7s28_0872909b-ee36-482c-a6d7-f6d7ee6cc5ff/kube-rbac-proxy/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.323563 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-l7s28_0872909b-ee36-482c-a6d7-f6d7ee6cc5ff/manager/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.384769 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-mhr7r_03d9d400-b25b-4ac4-bad3-55afbae399e4/kube-rbac-proxy/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.446259 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-mhr7r_03d9d400-b25b-4ac4-bad3-55afbae399e4/manager/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.609461 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-k8d2c_2c9ae2aa-9390-409b-b50f-61295577580a/operator/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.657131 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-xdfmz_7f7481f9-19ef-4b29-95ef-043c7306f5cc/kube-rbac-proxy/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.810054 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-xdfmz_7f7481f9-19ef-4b29-95ef-043c7306f5cc/manager/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.857901 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-999cf8558-p99s8_4551cf35-cc78-43c0-a468-2e6518e336ff/kube-rbac-proxy/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.941212 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8f7469895-dzfgg_15eaa14e-a3cd-4e68-8531-741ae62b9d58/manager/0.log" Dec 02 11:12:36 crc kubenswrapper[4711]: I1202 11:12:36.949511 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-999cf8558-p99s8_4551cf35-cc78-43c0-a468-2e6518e336ff/manager/0.log" Dec 02 11:12:37 crc kubenswrapper[4711]: I1202 11:12:37.035042 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fj96f_01685621-3c95-4091-a03a-de8d25c67efd/kube-rbac-proxy/0.log" Dec 02 11:12:37 crc kubenswrapper[4711]: I1202 11:12:37.059876 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fj96f_01685621-3c95-4091-a03a-de8d25c67efd/manager/0.log" Dec 02 11:12:37 crc kubenswrapper[4711]: I1202 11:12:37.163117 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-7hlb8_13bbf4f3-8a73-45b8-80f5-52907db710c0/kube-rbac-proxy/0.log" Dec 02 11:12:37 crc kubenswrapper[4711]: I1202 11:12:37.206708 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-7hlb8_13bbf4f3-8a73-45b8-80f5-52907db710c0/manager/0.log" Dec 02 11:12:56 crc kubenswrapper[4711]: I1202 11:12:56.012279 4711 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7kmjp_f6b3df69-76ea-4424-8d50-b2646cf2cd0e/control-plane-machine-set-operator/0.log" Dec 02 11:12:56 crc kubenswrapper[4711]: I1202 11:12:56.122206 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5n6jf_4d5f3363-25e9-4f5b-94ed-843a17d17997/kube-rbac-proxy/0.log" Dec 02 11:12:56 crc kubenswrapper[4711]: I1202 11:12:56.183998 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5n6jf_4d5f3363-25e9-4f5b-94ed-843a17d17997/machine-api-operator/0.log" Dec 02 11:13:08 crc kubenswrapper[4711]: I1202 11:13:08.736136 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-rvx4g_1deeebfb-423b-4a73-a76a-da43ae5dd8a9/cert-manager-controller/0.log" Dec 02 11:13:08 crc kubenswrapper[4711]: I1202 11:13:08.891228 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-mht85_9f001212-3824-41ea-a836-63c46277f629/cert-manager-cainjector/0.log" Dec 02 11:13:08 crc kubenswrapper[4711]: I1202 11:13:08.974684 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-h2pxz_beca3d6d-1017-4654-a1e1-6539558badf4/cert-manager-webhook/0.log" Dec 02 11:13:21 crc kubenswrapper[4711]: I1202 11:13:21.614365 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-b6vc9_4f84ccb3-491d-4453-aaab-89e33441a3e5/nmstate-console-plugin/0.log" Dec 02 11:13:21 crc kubenswrapper[4711]: I1202 11:13:21.762610 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-6ndb5_e515d3a0-2428-4629-833b-f23af0d11b10/nmstate-handler/0.log" Dec 02 11:13:21 crc kubenswrapper[4711]: I1202 11:13:21.810820 4711 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ddk76_2fa89fd1-9149-414e-8214-c1bbb1563330/nmstate-metrics/0.log" Dec 02 11:13:21 crc kubenswrapper[4711]: I1202 11:13:21.812333 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ddk76_2fa89fd1-9149-414e-8214-c1bbb1563330/kube-rbac-proxy/0.log" Dec 02 11:13:21 crc kubenswrapper[4711]: I1202 11:13:21.969569 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-t599q_edf95574-1178-4d62-b5b2-7dd68fce39da/nmstate-operator/0.log" Dec 02 11:13:22 crc kubenswrapper[4711]: I1202 11:13:22.002507 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-qz4ft_5b290675-5c06-445c-a50b-34ac2ba80718/nmstate-webhook/0.log" Dec 02 11:13:22 crc kubenswrapper[4711]: I1202 11:13:22.585632 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:13:22 crc kubenswrapper[4711]: I1202 11:13:22.586105 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.712789 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pgccx"] Dec 02 11:13:30 crc kubenswrapper[4711]: E1202 11:13:30.713781 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0" containerName="container-00" Dec 02 
11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.713804 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0" containerName="container-00" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.714145 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="853d1a8c-e4ad-4ecd-8d1c-548cc9b008d0" containerName="container-00" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.715566 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.738303 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgccx"] Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.854384 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnqz\" (UniqueName: \"kubernetes.io/projected/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-kube-api-access-hlnqz\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.854749 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-utilities\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.854777 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-catalog-content\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 
02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.956750 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-utilities\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.956825 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-catalog-content\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.956990 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlnqz\" (UniqueName: \"kubernetes.io/projected/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-kube-api-access-hlnqz\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.957348 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-utilities\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 11:13:30.957520 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-catalog-content\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:30 crc kubenswrapper[4711]: I1202 
11:13:30.990511 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlnqz\" (UniqueName: \"kubernetes.io/projected/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-kube-api-access-hlnqz\") pod \"community-operators-pgccx\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:31 crc kubenswrapper[4711]: I1202 11:13:31.162234 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:31 crc kubenswrapper[4711]: I1202 11:13:31.651869 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgccx"] Dec 02 11:13:31 crc kubenswrapper[4711]: I1202 11:13:31.675825 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgccx" event={"ID":"dd253910-a1e6-4ff4-8a08-2e64e89f64aa","Type":"ContainerStarted","Data":"eb9e4b4b1a36fd1eb84220acf71077513ec8952bafe02739dcceab2a81cdde7d"} Dec 02 11:13:32 crc kubenswrapper[4711]: I1202 11:13:32.689487 4711 generic.go:334] "Generic (PLEG): container finished" podID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerID="49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63" exitCode=0 Dec 02 11:13:32 crc kubenswrapper[4711]: I1202 11:13:32.690973 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgccx" event={"ID":"dd253910-a1e6-4ff4-8a08-2e64e89f64aa","Type":"ContainerDied","Data":"49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63"} Dec 02 11:13:33 crc kubenswrapper[4711]: I1202 11:13:33.700295 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgccx" event={"ID":"dd253910-a1e6-4ff4-8a08-2e64e89f64aa","Type":"ContainerStarted","Data":"6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a"} Dec 02 11:13:34 crc kubenswrapper[4711]: I1202 
11:13:34.712091 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgccx" event={"ID":"dd253910-a1e6-4ff4-8a08-2e64e89f64aa","Type":"ContainerDied","Data":"6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a"} Dec 02 11:13:34 crc kubenswrapper[4711]: I1202 11:13:34.711987 4711 generic.go:334] "Generic (PLEG): container finished" podID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerID="6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a" exitCode=0 Dec 02 11:13:36 crc kubenswrapper[4711]: I1202 11:13:36.730654 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgccx" event={"ID":"dd253910-a1e6-4ff4-8a08-2e64e89f64aa","Type":"ContainerStarted","Data":"1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b"} Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.116656 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-z96st_50c9eab8-843d-46f0-8af8-85bedeb5c0e9/kube-rbac-proxy/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.223571 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-z96st_50c9eab8-843d-46f0-8af8-85bedeb5c0e9/controller/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.319775 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-frr-files/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.481098 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-frr-files/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.529921 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-metrics/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 
11:13:37.546839 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-reloader/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.564948 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-reloader/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.731667 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-metrics/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.743820 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-metrics/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.746527 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-frr-files/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.769255 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-reloader/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.951324 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-reloader/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.976798 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-metrics/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.980335 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/controller/0.log" Dec 02 11:13:37 crc kubenswrapper[4711]: I1202 11:13:37.985485 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-frr-files/0.log" Dec 02 11:13:38 crc kubenswrapper[4711]: I1202 11:13:38.145640 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/kube-rbac-proxy/0.log" Dec 02 11:13:38 crc kubenswrapper[4711]: I1202 11:13:38.174817 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/frr-metrics/0.log" Dec 02 11:13:38 crc kubenswrapper[4711]: I1202 11:13:38.191674 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/kube-rbac-proxy-frr/0.log" Dec 02 11:13:38 crc kubenswrapper[4711]: I1202 11:13:38.372739 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/reloader/0.log" Dec 02 11:13:38 crc kubenswrapper[4711]: I1202 11:13:38.429785 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-t2846_08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0/frr-k8s-webhook-server/0.log" Dec 02 11:13:38 crc kubenswrapper[4711]: I1202 11:13:38.597208 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c8d4f74b4-pmd42_b61b9ed6-c590-43f9-b029-82f457a65986/manager/0.log" Dec 02 11:13:38 crc kubenswrapper[4711]: I1202 11:13:38.781612 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-575f6f4cd9-2pkhc_02ce4661-516f-4d17-b5b8-69958d4c4ee8/webhook-server/0.log" Dec 02 11:13:38 crc kubenswrapper[4711]: I1202 11:13:38.926901 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fjkgx_d6e279b9-33a8-48d9-9442-be75926b530c/kube-rbac-proxy/0.log" Dec 02 11:13:39 crc kubenswrapper[4711]: I1202 11:13:39.363422 4711 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fjkgx_d6e279b9-33a8-48d9-9442-be75926b530c/speaker/0.log" Dec 02 11:13:39 crc kubenswrapper[4711]: I1202 11:13:39.436987 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/frr/0.log" Dec 02 11:13:41 crc kubenswrapper[4711]: I1202 11:13:41.163520 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:41 crc kubenswrapper[4711]: I1202 11:13:41.165599 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:41 crc kubenswrapper[4711]: I1202 11:13:41.229523 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:41 crc kubenswrapper[4711]: I1202 11:13:41.270359 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pgccx" podStartSLOduration=8.38857373 podStartE2EDuration="11.270276343s" podCreationTimestamp="2025-12-02 11:13:30 +0000 UTC" firstStartedPulling="2025-12-02 11:13:32.691432586 +0000 UTC m=+3602.400799043" lastFinishedPulling="2025-12-02 11:13:35.573135169 +0000 UTC m=+3605.282501656" observedRunningTime="2025-12-02 11:13:36.752302282 +0000 UTC m=+3606.461668729" watchObservedRunningTime="2025-12-02 11:13:41.270276343 +0000 UTC m=+3610.979642830" Dec 02 11:13:41 crc kubenswrapper[4711]: I1202 11:13:41.850859 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:41 crc kubenswrapper[4711]: I1202 11:13:41.910998 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgccx"] Dec 02 11:13:43 crc kubenswrapper[4711]: I1202 11:13:43.820325 4711 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pgccx" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerName="registry-server" containerID="cri-o://1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b" gracePeriod=2 Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.333944 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.429939 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlnqz\" (UniqueName: \"kubernetes.io/projected/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-kube-api-access-hlnqz\") pod \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.430093 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-catalog-content\") pod \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.430130 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-utilities\") pod \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\" (UID: \"dd253910-a1e6-4ff4-8a08-2e64e89f64aa\") " Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.431188 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-utilities" (OuterVolumeSpecName: "utilities") pod "dd253910-a1e6-4ff4-8a08-2e64e89f64aa" (UID: "dd253910-a1e6-4ff4-8a08-2e64e89f64aa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.435661 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-kube-api-access-hlnqz" (OuterVolumeSpecName: "kube-api-access-hlnqz") pod "dd253910-a1e6-4ff4-8a08-2e64e89f64aa" (UID: "dd253910-a1e6-4ff4-8a08-2e64e89f64aa"). InnerVolumeSpecName "kube-api-access-hlnqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.486101 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd253910-a1e6-4ff4-8a08-2e64e89f64aa" (UID: "dd253910-a1e6-4ff4-8a08-2e64e89f64aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.530937 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.531157 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.531219 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlnqz\" (UniqueName: \"kubernetes.io/projected/dd253910-a1e6-4ff4-8a08-2e64e89f64aa-kube-api-access-hlnqz\") on node \"crc\" DevicePath \"\"" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.834303 4711 generic.go:334] "Generic (PLEG): container finished" podID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" 
containerID="1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b" exitCode=0 Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.834355 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgccx" event={"ID":"dd253910-a1e6-4ff4-8a08-2e64e89f64aa","Type":"ContainerDied","Data":"1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b"} Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.834387 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgccx" event={"ID":"dd253910-a1e6-4ff4-8a08-2e64e89f64aa","Type":"ContainerDied","Data":"eb9e4b4b1a36fd1eb84220acf71077513ec8952bafe02739dcceab2a81cdde7d"} Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.834412 4711 scope.go:117] "RemoveContainer" containerID="1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.834553 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pgccx" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.860395 4711 scope.go:117] "RemoveContainer" containerID="6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.903145 4711 scope.go:117] "RemoveContainer" containerID="49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.903216 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgccx"] Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.914314 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pgccx"] Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.955476 4711 scope.go:117] "RemoveContainer" containerID="1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b" Dec 02 11:13:44 crc kubenswrapper[4711]: E1202 11:13:44.956179 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b\": container with ID starting with 1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b not found: ID does not exist" containerID="1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.956256 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b"} err="failed to get container status \"1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b\": rpc error: code = NotFound desc = could not find container \"1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b\": container with ID starting with 1dc41fc54f1290955741b600034b5d0bc85555c9fba0c176ce60e7cb8acd785b not 
found: ID does not exist" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.956304 4711 scope.go:117] "RemoveContainer" containerID="6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a" Dec 02 11:13:44 crc kubenswrapper[4711]: E1202 11:13:44.956815 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a\": container with ID starting with 6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a not found: ID does not exist" containerID="6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.956849 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a"} err="failed to get container status \"6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a\": rpc error: code = NotFound desc = could not find container \"6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a\": container with ID starting with 6629db07e45d242436390b15dba46c62fe2e685ef30099f1730dbd99db5e7b2a not found: ID does not exist" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.956876 4711 scope.go:117] "RemoveContainer" containerID="49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63" Dec 02 11:13:44 crc kubenswrapper[4711]: E1202 11:13:44.957285 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63\": container with ID starting with 49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63 not found: ID does not exist" containerID="49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63" Dec 02 11:13:44 crc kubenswrapper[4711]: I1202 11:13:44.957304 4711 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63"} err="failed to get container status \"49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63\": rpc error: code = NotFound desc = could not find container \"49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63\": container with ID starting with 49ccf6f6f33a2b6f018324976a88a409b7d725ed125e530c09cce61bdb8d2a63 not found: ID does not exist" Dec 02 11:13:45 crc kubenswrapper[4711]: I1202 11:13:45.088526 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" path="/var/lib/kubelet/pods/dd253910-a1e6-4ff4-8a08-2e64e89f64aa/volumes" Dec 02 11:13:52 crc kubenswrapper[4711]: I1202 11:13:52.586896 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:13:52 crc kubenswrapper[4711]: I1202 11:13:52.587929 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:13:53 crc kubenswrapper[4711]: I1202 11:13:53.639766 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/util/0.log" Dec 02 11:13:53 crc kubenswrapper[4711]: I1202 11:13:53.839793 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/pull/0.log" Dec 02 11:13:53 crc kubenswrapper[4711]: I1202 11:13:53.854553 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/util/0.log" Dec 02 11:13:53 crc kubenswrapper[4711]: I1202 11:13:53.857347 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/pull/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.059370 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/pull/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.082788 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/util/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.103683 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/extract/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.386538 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/util/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.626068 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/util/0.log" Dec 02 
11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.684043 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/pull/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.692776 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/pull/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.781266 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/util/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.872371 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/pull/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.872558 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/extract/0.log" Dec 02 11:13:54 crc kubenswrapper[4711]: I1202 11:13:54.989096 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-utilities/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.148450 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-content/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.156644 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-content/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.174040 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-utilities/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.376728 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-content/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.385398 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-utilities/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.581509 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-utilities/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.841374 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-utilities/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.902730 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/registry-server/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.909321 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-content/0.log" Dec 02 11:13:55 crc kubenswrapper[4711]: I1202 11:13:55.916345 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-content/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.046035 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-utilities/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.149502 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-content/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.281914 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-z4fwd_bc27e106-dc06-4326-9cc4-99ca9b5206bb/marketplace-operator/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.370823 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-utilities/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.417049 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/registry-server/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.541523 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-content/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.568162 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-utilities/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.582211 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-content/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.761768 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-utilities/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.800923 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-content/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.957354 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/registry-server/0.log" Dec 02 11:13:56 crc kubenswrapper[4711]: I1202 11:13:56.991144 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-utilities/0.log" Dec 02 11:13:57 crc kubenswrapper[4711]: I1202 11:13:57.166322 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-content/0.log" Dec 02 11:13:57 crc kubenswrapper[4711]: I1202 11:13:57.186630 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-utilities/0.log" Dec 02 11:13:57 crc kubenswrapper[4711]: I1202 11:13:57.220709 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-content/0.log" Dec 02 11:13:57 crc kubenswrapper[4711]: I1202 11:13:57.430931 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-utilities/0.log" 
Dec 02 11:13:57 crc kubenswrapper[4711]: I1202 11:13:57.433761 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-content/0.log" Dec 02 11:13:57 crc kubenswrapper[4711]: I1202 11:13:57.857293 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/registry-server/0.log" Dec 02 11:14:21 crc kubenswrapper[4711]: E1202 11:14:21.980741 4711 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.249:51676->38.129.56.249:46835: write tcp 38.129.56.249:51676->38.129.56.249:46835: write: broken pipe Dec 02 11:14:22 crc kubenswrapper[4711]: I1202 11:14:22.586275 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:14:22 crc kubenswrapper[4711]: I1202 11:14:22.586353 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:14:22 crc kubenswrapper[4711]: I1202 11:14:22.586405 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" Dec 02 11:14:22 crc kubenswrapper[4711]: I1202 11:14:22.587156 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"} 
pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:14:22 crc kubenswrapper[4711]: I1202 11:14:22.587213 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" gracePeriod=600 Dec 02 11:14:22 crc kubenswrapper[4711]: E1202 11:14:22.728846 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:14:22 crc kubenswrapper[4711]: E1202 11:14:22.748598 4711 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.249:51482->38.129.56.249:46835: read tcp 38.129.56.249:51482->38.129.56.249:46835: read: connection reset by peer Dec 02 11:14:23 crc kubenswrapper[4711]: I1202 11:14:23.180880 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" exitCode=0 Dec 02 11:14:23 crc kubenswrapper[4711]: I1202 11:14:23.180925 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"} Dec 02 11:14:23 crc kubenswrapper[4711]: I1202 11:14:23.180970 4711 scope.go:117] 
"RemoveContainer" containerID="bcb5ee66f74bc7fa910b7fd68c8b56646b704858f1b9b5bdf111d82410fbe2fc" Dec 02 11:14:23 crc kubenswrapper[4711]: I1202 11:14:23.181590 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:14:23 crc kubenswrapper[4711]: E1202 11:14:23.181817 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:14:39 crc kubenswrapper[4711]: I1202 11:14:39.079058 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:14:39 crc kubenswrapper[4711]: E1202 11:14:39.080138 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:14:54 crc kubenswrapper[4711]: I1202 11:14:54.077876 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:14:54 crc kubenswrapper[4711]: E1202 11:14:54.079587 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.185990 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp"] Dec 02 11:15:00 crc kubenswrapper[4711]: E1202 11:15:00.187376 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerName="extract-utilities" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.187425 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerName="extract-utilities" Dec 02 11:15:00 crc kubenswrapper[4711]: E1202 11:15:00.187481 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerName="extract-content" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.187498 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerName="extract-content" Dec 02 11:15:00 crc kubenswrapper[4711]: E1202 11:15:00.187548 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerName="registry-server" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.187565 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerName="registry-server" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.188038 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd253910-a1e6-4ff4-8a08-2e64e89f64aa" containerName="registry-server" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.189382 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.192143 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.192815 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.210399 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp"] Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.320251 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-secret-volume\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.320542 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-config-volume\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.320574 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5l48\" (UniqueName: \"kubernetes.io/projected/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-kube-api-access-n5l48\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.422403 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-config-volume\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.422698 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5l48\" (UniqueName: \"kubernetes.io/projected/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-kube-api-access-n5l48\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.423040 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-secret-volume\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.423592 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-config-volume\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.440918 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-secret-volume\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.446613 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5l48\" (UniqueName: \"kubernetes.io/projected/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-kube-api-access-n5l48\") pod \"collect-profiles-29411235-4zbmp\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.509924 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:00 crc kubenswrapper[4711]: I1202 11:15:00.998503 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp"] Dec 02 11:15:00 crc kubenswrapper[4711]: W1202 11:15:00.999414 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff0e3926_4ec9_4a1b_94ee_cb75b8c4f193.slice/crio-274c66482f470b7a293393e805e45878ca42de7e6c9646d891e1c58861930f10 WatchSource:0}: Error finding container 274c66482f470b7a293393e805e45878ca42de7e6c9646d891e1c58861930f10: Status 404 returned error can't find the container with id 274c66482f470b7a293393e805e45878ca42de7e6c9646d891e1c58861930f10 Dec 02 11:15:01 crc kubenswrapper[4711]: I1202 11:15:01.570355 4711 generic.go:334] "Generic (PLEG): container finished" podID="ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193" containerID="550192bd36571dfe604f33d2160f75c13f5bb1f021c323822f8b684823fdc9d9" exitCode=0 Dec 02 11:15:01 crc kubenswrapper[4711]: I1202 11:15:01.570456 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" event={"ID":"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193","Type":"ContainerDied","Data":"550192bd36571dfe604f33d2160f75c13f5bb1f021c323822f8b684823fdc9d9"} Dec 02 11:15:01 crc kubenswrapper[4711]: I1202 11:15:01.570731 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" event={"ID":"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193","Type":"ContainerStarted","Data":"274c66482f470b7a293393e805e45878ca42de7e6c9646d891e1c58861930f10"} Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.000605 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.100571 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5l48\" (UniqueName: \"kubernetes.io/projected/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-kube-api-access-n5l48\") pod \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.100819 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-config-volume\") pod \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.100867 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-secret-volume\") pod \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\" (UID: \"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193\") " Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.104372 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-config-volume" (OuterVolumeSpecName: "config-volume") pod "ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193" (UID: "ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.154176 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193" (UID: "ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.203323 4711 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.203365 4711 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.232789 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-kube-api-access-n5l48" (OuterVolumeSpecName: "kube-api-access-n5l48") pod "ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193" (UID: "ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193"). InnerVolumeSpecName "kube-api-access-n5l48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.304164 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5l48\" (UniqueName: \"kubernetes.io/projected/ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193-kube-api-access-n5l48\") on node \"crc\" DevicePath \"\"" Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.598249 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" event={"ID":"ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193","Type":"ContainerDied","Data":"274c66482f470b7a293393e805e45878ca42de7e6c9646d891e1c58861930f10"} Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.598308 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="274c66482f470b7a293393e805e45878ca42de7e6c9646d891e1c58861930f10" Dec 02 11:15:03 crc kubenswrapper[4711]: I1202 11:15:03.598314 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-4zbmp" Dec 02 11:15:04 crc kubenswrapper[4711]: I1202 11:15:04.084354 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m"] Dec 02 11:15:04 crc kubenswrapper[4711]: I1202 11:15:04.101188 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-jgx5m"] Dec 02 11:15:05 crc kubenswrapper[4711]: I1202 11:15:05.078333 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:15:05 crc kubenswrapper[4711]: E1202 11:15:05.078773 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:15:05 crc kubenswrapper[4711]: I1202 11:15:05.091113 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6ff6c5-8745-4f8d-83af-83ded31e5f85" path="/var/lib/kubelet/pods/0c6ff6c5-8745-4f8d-83af-83ded31e5f85/volumes" Dec 02 11:15:19 crc kubenswrapper[4711]: I1202 11:15:19.082915 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:15:19 crc kubenswrapper[4711]: E1202 11:15:19.084833 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:15:30 crc kubenswrapper[4711]: I1202 11:15:30.078209 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:15:30 crc kubenswrapper[4711]: E1202 11:15:30.079484 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:15:37 crc kubenswrapper[4711]: I1202 11:15:37.231363 4711 generic.go:334] "Generic (PLEG): container finished" podID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerID="c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d" exitCode=0 Dec 02 11:15:37 crc kubenswrapper[4711]: I1202 11:15:37.231454 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2zv5/must-gather-pfnmv" event={"ID":"bbb0b11b-c436-4c78-bb82-ecac75bb40ab","Type":"ContainerDied","Data":"c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d"} Dec 02 11:15:37 crc kubenswrapper[4711]: I1202 11:15:37.233705 4711 scope.go:117] "RemoveContainer" containerID="c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d" Dec 02 11:15:37 crc kubenswrapper[4711]: I1202 11:15:37.561350 4711 scope.go:117] "RemoveContainer" containerID="31fd1083de2f86e4a7c22037ff483048dd71d9faebaf7b28b1e4d031a55e8e56" Dec 02 11:15:37 crc kubenswrapper[4711]: I1202 11:15:37.778232 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j2zv5_must-gather-pfnmv_bbb0b11b-c436-4c78-bb82-ecac75bb40ab/gather/0.log" Dec 02 11:15:45 crc kubenswrapper[4711]: I1202 
11:15:45.080436 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:15:45 crc kubenswrapper[4711]: E1202 11:15:45.081697 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:15:46 crc kubenswrapper[4711]: I1202 11:15:46.357822 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j2zv5/must-gather-pfnmv"] Dec 02 11:15:46 crc kubenswrapper[4711]: I1202 11:15:46.358105 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-j2zv5/must-gather-pfnmv" podUID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerName="copy" containerID="cri-o://6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def" gracePeriod=2 Dec 02 11:15:46 crc kubenswrapper[4711]: I1202 11:15:46.389619 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j2zv5/must-gather-pfnmv"] Dec 02 11:15:46 crc kubenswrapper[4711]: I1202 11:15:46.805808 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j2zv5_must-gather-pfnmv_bbb0b11b-c436-4c78-bb82-ecac75bb40ab/copy/0.log" Dec 02 11:15:46 crc kubenswrapper[4711]: I1202 11:15:46.806691 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2zv5/must-gather-pfnmv"
Dec 02 11:15:46 crc kubenswrapper[4711]: I1202 11:15:46.941104 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fbw7\" (UniqueName: \"kubernetes.io/projected/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-kube-api-access-2fbw7\") pod \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\" (UID: \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\") "
Dec 02 11:15:46 crc kubenswrapper[4711]: I1202 11:15:46.941700 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-must-gather-output\") pod \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\" (UID: \"bbb0b11b-c436-4c78-bb82-ecac75bb40ab\") "
Dec 02 11:15:46 crc kubenswrapper[4711]: I1202 11:15:46.946867 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-kube-api-access-2fbw7" (OuterVolumeSpecName: "kube-api-access-2fbw7") pod "bbb0b11b-c436-4c78-bb82-ecac75bb40ab" (UID: "bbb0b11b-c436-4c78-bb82-ecac75bb40ab"). InnerVolumeSpecName "kube-api-access-2fbw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.044210 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fbw7\" (UniqueName: \"kubernetes.io/projected/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-kube-api-access-2fbw7\") on node \"crc\" DevicePath \"\""
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.120374 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bbb0b11b-c436-4c78-bb82-ecac75bb40ab" (UID: "bbb0b11b-c436-4c78-bb82-ecac75bb40ab"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.147510 4711 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbb0b11b-c436-4c78-bb82-ecac75bb40ab-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.383701 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j2zv5_must-gather-pfnmv_bbb0b11b-c436-4c78-bb82-ecac75bb40ab/copy/0.log"
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.384371 4711 generic.go:334] "Generic (PLEG): container finished" podID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerID="6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def" exitCode=143
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.384455 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2zv5/must-gather-pfnmv"
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.384498 4711 scope.go:117] "RemoveContainer" containerID="6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def"
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.415407 4711 scope.go:117] "RemoveContainer" containerID="c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d"
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.494087 4711 scope.go:117] "RemoveContainer" containerID="6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def"
Dec 02 11:15:47 crc kubenswrapper[4711]: E1202 11:15:47.494695 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def\": container with ID starting with 6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def not found: ID does not exist" containerID="6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def"
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.494730 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def"} err="failed to get container status \"6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def\": rpc error: code = NotFound desc = could not find container \"6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def\": container with ID starting with 6d5d2796579123a68c420c12bd394623e6a924f35eae5f49aa7b6b3252335def not found: ID does not exist"
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.494751 4711 scope.go:117] "RemoveContainer" containerID="c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d"
Dec 02 11:15:47 crc kubenswrapper[4711]: E1202 11:15:47.495587 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d\": container with ID starting with c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d not found: ID does not exist" containerID="c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d"
Dec 02 11:15:47 crc kubenswrapper[4711]: I1202 11:15:47.495615 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d"} err="failed to get container status \"c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d\": rpc error: code = NotFound desc = could not find container \"c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d\": container with ID starting with c4387ee1be31d34fb301d8a72d88a645761488e5b610affc494cff9e3eafed5d not found: ID does not exist"
Dec 02 11:15:49 crc kubenswrapper[4711]: I1202 11:15:49.093176 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" path="/var/lib/kubelet/pods/bbb0b11b-c436-4c78-bb82-ecac75bb40ab/volumes"
Dec 02 11:15:58 crc kubenswrapper[4711]: I1202 11:15:58.078942 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:15:58 crc kubenswrapper[4711]: E1202 11:15:58.079929 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:16:11 crc kubenswrapper[4711]: I1202 11:16:11.092311 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:16:11 crc kubenswrapper[4711]: E1202 11:16:11.093372 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:16:26 crc kubenswrapper[4711]: I1202 11:16:26.078878 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:16:26 crc kubenswrapper[4711]: E1202 11:16:26.079649 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:16:37 crc kubenswrapper[4711]: I1202 11:16:37.080179 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:16:37 crc kubenswrapper[4711]: E1202 11:16:37.081203 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.099429 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqm4t"]
Dec 02 11:16:46 crc kubenswrapper[4711]: E1202 11:16:46.101319 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerName="gather"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.101387 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerName="gather"
Dec 02 11:16:46 crc kubenswrapper[4711]: E1202 11:16:46.101452 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerName="copy"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.101470 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerName="copy"
Dec 02 11:16:46 crc kubenswrapper[4711]: E1202 11:16:46.101503 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193" containerName="collect-profiles"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.101523 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193" containerName="collect-profiles"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.102001 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerName="copy"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.102052 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb0b11b-c436-4c78-bb82-ecac75bb40ab" containerName="gather"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.102079 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0e3926-4ec9-4a1b-94ee-cb75b8c4f193" containerName="collect-profiles"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.106032 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.126232 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqm4t"]
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.191772 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89786\" (UniqueName: \"kubernetes.io/projected/1f71c11e-8f82-45e9-a2f2-849287a40e54-kube-api-access-89786\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.192181 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-utilities\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.192315 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-catalog-content\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.294445 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-catalog-content\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.294539 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89786\" (UniqueName: \"kubernetes.io/projected/1f71c11e-8f82-45e9-a2f2-849287a40e54-kube-api-access-89786\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.294666 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-utilities\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.294980 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-catalog-content\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.295069 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-utilities\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.320371 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89786\" (UniqueName: \"kubernetes.io/projected/1f71c11e-8f82-45e9-a2f2-849287a40e54-kube-api-access-89786\") pod \"redhat-operators-hqm4t\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") " pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.438091 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:46 crc kubenswrapper[4711]: I1202 11:16:46.955576 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqm4t"]
Dec 02 11:16:47 crc kubenswrapper[4711]: I1202 11:16:47.273986 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqm4t" event={"ID":"1f71c11e-8f82-45e9-a2f2-849287a40e54","Type":"ContainerStarted","Data":"87662f05afa5b0cc8eee3afdfe7245519ab4a133032bd4e9b1b64e4067af2fb7"}
Dec 02 11:16:48 crc kubenswrapper[4711]: I1202 11:16:48.291602 4711 generic.go:334] "Generic (PLEG): container finished" podID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerID="746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8" exitCode=0
Dec 02 11:16:48 crc kubenswrapper[4711]: I1202 11:16:48.291730 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqm4t" event={"ID":"1f71c11e-8f82-45e9-a2f2-849287a40e54","Type":"ContainerDied","Data":"746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8"}
Dec 02 11:16:48 crc kubenswrapper[4711]: I1202 11:16:48.296066 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 11:16:50 crc kubenswrapper[4711]: I1202 11:16:50.078545 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:16:50 crc kubenswrapper[4711]: E1202 11:16:50.079436 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:16:50 crc kubenswrapper[4711]: I1202 11:16:50.323438 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqm4t" event={"ID":"1f71c11e-8f82-45e9-a2f2-849287a40e54","Type":"ContainerStarted","Data":"e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead"}
Dec 02 11:16:51 crc kubenswrapper[4711]: I1202 11:16:51.337501 4711 generic.go:334] "Generic (PLEG): container finished" podID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerID="e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead" exitCode=0
Dec 02 11:16:51 crc kubenswrapper[4711]: I1202 11:16:51.337612 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqm4t" event={"ID":"1f71c11e-8f82-45e9-a2f2-849287a40e54","Type":"ContainerDied","Data":"e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead"}
Dec 02 11:16:52 crc kubenswrapper[4711]: I1202 11:16:52.356901 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqm4t" event={"ID":"1f71c11e-8f82-45e9-a2f2-849287a40e54","Type":"ContainerStarted","Data":"42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24"}
Dec 02 11:16:52 crc kubenswrapper[4711]: I1202 11:16:52.382977 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hqm4t" podStartSLOduration=2.759416378 podStartE2EDuration="6.382904788s" podCreationTimestamp="2025-12-02 11:16:46 +0000 UTC" firstStartedPulling="2025-12-02 11:16:48.295395724 +0000 UTC m=+3798.004762201" lastFinishedPulling="2025-12-02 11:16:51.918884164 +0000 UTC m=+3801.628250611" observedRunningTime="2025-12-02 11:16:52.379307021 +0000 UTC m=+3802.088673468" watchObservedRunningTime="2025-12-02 11:16:52.382904788 +0000 UTC m=+3802.092271265"
Dec 02 11:16:56 crc kubenswrapper[4711]: I1202 11:16:56.439454 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:56 crc kubenswrapper[4711]: I1202 11:16:56.439889 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:16:57 crc kubenswrapper[4711]: I1202 11:16:57.536732 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqm4t" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="registry-server" probeResult="failure" output=<
Dec 02 11:16:57 crc kubenswrapper[4711]: timeout: failed to connect service ":50051" within 1s
Dec 02 11:16:57 crc kubenswrapper[4711]: >
Dec 02 11:17:04 crc kubenswrapper[4711]: I1202 11:17:04.079307 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:17:04 crc kubenswrapper[4711]: E1202 11:17:04.079996 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:17:06 crc kubenswrapper[4711]: I1202 11:17:06.517446 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:17:06 crc kubenswrapper[4711]: I1202 11:17:06.584812 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:17:06 crc kubenswrapper[4711]: I1202 11:17:06.769635 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqm4t"]
Dec 02 11:17:08 crc kubenswrapper[4711]: I1202 11:17:08.540847 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqm4t" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="registry-server" containerID="cri-o://42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24" gracePeriod=2
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.094704 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.194089 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89786\" (UniqueName: \"kubernetes.io/projected/1f71c11e-8f82-45e9-a2f2-849287a40e54-kube-api-access-89786\") pod \"1f71c11e-8f82-45e9-a2f2-849287a40e54\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") "
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.194245 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-utilities\") pod \"1f71c11e-8f82-45e9-a2f2-849287a40e54\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") "
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.194301 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-catalog-content\") pod \"1f71c11e-8f82-45e9-a2f2-849287a40e54\" (UID: \"1f71c11e-8f82-45e9-a2f2-849287a40e54\") "
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.195706 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-utilities" (OuterVolumeSpecName: "utilities") pod "1f71c11e-8f82-45e9-a2f2-849287a40e54" (UID: "1f71c11e-8f82-45e9-a2f2-849287a40e54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.203106 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f71c11e-8f82-45e9-a2f2-849287a40e54-kube-api-access-89786" (OuterVolumeSpecName: "kube-api-access-89786") pod "1f71c11e-8f82-45e9-a2f2-849287a40e54" (UID: "1f71c11e-8f82-45e9-a2f2-849287a40e54"). InnerVolumeSpecName "kube-api-access-89786". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.297073 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89786\" (UniqueName: \"kubernetes.io/projected/1f71c11e-8f82-45e9-a2f2-849287a40e54-kube-api-access-89786\") on node \"crc\" DevicePath \"\""
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.297432 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.373040 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f71c11e-8f82-45e9-a2f2-849287a40e54" (UID: "1f71c11e-8f82-45e9-a2f2-849287a40e54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.398895 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f71c11e-8f82-45e9-a2f2-849287a40e54-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.560227 4711 generic.go:334] "Generic (PLEG): container finished" podID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerID="42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24" exitCode=0
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.560314 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqm4t" event={"ID":"1f71c11e-8f82-45e9-a2f2-849287a40e54","Type":"ContainerDied","Data":"42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24"}
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.560437 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqm4t" event={"ID":"1f71c11e-8f82-45e9-a2f2-849287a40e54","Type":"ContainerDied","Data":"87662f05afa5b0cc8eee3afdfe7245519ab4a133032bd4e9b1b64e4067af2fb7"}
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.560431 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqm4t"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.560461 4711 scope.go:117] "RemoveContainer" containerID="42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.598918 4711 scope.go:117] "RemoveContainer" containerID="e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.630049 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqm4t"]
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.639989 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqm4t"]
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.649971 4711 scope.go:117] "RemoveContainer" containerID="746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.705315 4711 scope.go:117] "RemoveContainer" containerID="42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24"
Dec 02 11:17:09 crc kubenswrapper[4711]: E1202 11:17:09.705898 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24\": container with ID starting with 42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24 not found: ID does not exist" containerID="42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.705975 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24"} err="failed to get container status \"42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24\": rpc error: code = NotFound desc = could not find container \"42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24\": container with ID starting with 42f57ee01d541246d28f30701b055f051f34027a281b8f71cecd175ef5a25d24 not found: ID does not exist"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.706011 4711 scope.go:117] "RemoveContainer" containerID="e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead"
Dec 02 11:17:09 crc kubenswrapper[4711]: E1202 11:17:09.706396 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead\": container with ID starting with e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead not found: ID does not exist" containerID="e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.706436 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead"} err="failed to get container status \"e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead\": rpc error: code = NotFound desc = could not find container \"e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead\": container with ID starting with e158bebc466dd19716ee4690d0ace977219335f56368c2233d7576f79960bead not found: ID does not exist"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.706459 4711 scope.go:117] "RemoveContainer" containerID="746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8"
Dec 02 11:17:09 crc kubenswrapper[4711]: E1202 11:17:09.706802 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8\": container with ID starting with 746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8 not found: ID does not exist" containerID="746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8"
Dec 02 11:17:09 crc kubenswrapper[4711]: I1202 11:17:09.706832 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8"} err="failed to get container status \"746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8\": rpc error: code = NotFound desc = could not find container \"746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8\": container with ID starting with 746136fcbe6bd9d76de8d6e2247b794a7fcf7491b8585e8248b0ccf30cd443d8 not found: ID does not exist"
Dec 02 11:17:11 crc kubenswrapper[4711]: I1202 11:17:11.101140 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" path="/var/lib/kubelet/pods/1f71c11e-8f82-45e9-a2f2-849287a40e54/volumes"
Dec 02 11:17:18 crc kubenswrapper[4711]: I1202 11:17:18.078858 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:17:18 crc kubenswrapper[4711]: E1202 11:17:18.079803 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:17:31 crc kubenswrapper[4711]: I1202 11:17:31.088656 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:17:31 crc kubenswrapper[4711]: E1202 11:17:31.089324 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:17:37 crc kubenswrapper[4711]: I1202 11:17:37.744506 4711 scope.go:117] "RemoveContainer" containerID="c70b9a11f3e02b0be693261ae5a8f4451927ba3b31afe0f058c274f821f9007b"
Dec 02 11:17:42 crc kubenswrapper[4711]: I1202 11:17:42.078524 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:17:42 crc kubenswrapper[4711]: E1202 11:17:42.079370 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:17:57 crc kubenswrapper[4711]: I1202 11:17:57.079477 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:17:57 crc kubenswrapper[4711]: E1202 11:17:57.081110 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:18:12 crc kubenswrapper[4711]: I1202 11:18:12.079484 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:18:12 crc kubenswrapper[4711]: E1202 11:18:12.080659 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:18:27 crc kubenswrapper[4711]: I1202 11:18:27.079379 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:18:27 crc kubenswrapper[4711]: E1202 11:18:27.080481 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:18:37 crc kubenswrapper[4711]: I1202 11:18:37.858362 4711 scope.go:117] "RemoveContainer" containerID="388cabcfe5dfb13e2115fea592c15cc85e9a24b92ae89d996ed73aff942d1de1"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.702256 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kr647/must-gather-bxxxc"]
Dec 02 11:18:38 crc kubenswrapper[4711]: E1202 11:18:38.702817 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="registry-server"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.702834 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="registry-server"
Dec 02 11:18:38 crc kubenswrapper[4711]: E1202 11:18:38.702855 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="extract-content"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.702862 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="extract-content"
Dec 02 11:18:38 crc kubenswrapper[4711]: E1202 11:18:38.702886 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="extract-utilities"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.702893 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="extract-utilities"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.703104 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f71c11e-8f82-45e9-a2f2-849287a40e54" containerName="registry-server"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.704111 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.710495 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kr647"/"openshift-service-ca.crt"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.710522 4711 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kr647"/"default-dockercfg-d6ndw"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.710756 4711 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kr647"/"kube-root-ca.crt"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.713123 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kr647/must-gather-bxxxc"]
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.868491 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-must-gather-output\") pod \"must-gather-bxxxc\" (UID: \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\") " pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.868618 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-975vg\" (UniqueName: \"kubernetes.io/projected/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-kube-api-access-975vg\") pod \"must-gather-bxxxc\" (UID: \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\") " pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.970089 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-must-gather-output\") pod \"must-gather-bxxxc\" (UID: \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\") " pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.970208 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-975vg\" (UniqueName: \"kubernetes.io/projected/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-kube-api-access-975vg\") pod \"must-gather-bxxxc\" (UID: \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\") " pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.970885 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-must-gather-output\") pod \"must-gather-bxxxc\" (UID: \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\") " pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:18:38 crc kubenswrapper[4711]: I1202 11:18:38.990701 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-975vg\" (UniqueName: \"kubernetes.io/projected/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-kube-api-access-975vg\") pod \"must-gather-bxxxc\" (UID: \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\") " pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:18:39 crc kubenswrapper[4711]: I1202 11:18:39.025734 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr647/must-gather-bxxxc" Dec 02 11:18:39 crc kubenswrapper[4711]: I1202 11:18:39.490587 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kr647/must-gather-bxxxc"] Dec 02 11:18:39 crc kubenswrapper[4711]: I1202 11:18:39.652736 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/must-gather-bxxxc" event={"ID":"cabdab0c-1ff8-4185-b7a7-3785a1eef79b","Type":"ContainerStarted","Data":"395812f79f5729c87d8f0e0cd958eb4bd6ef303f3108c99bdda2c9daa837ddcd"} Dec 02 11:18:40 crc kubenswrapper[4711]: I1202 11:18:40.662156 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/must-gather-bxxxc" event={"ID":"cabdab0c-1ff8-4185-b7a7-3785a1eef79b","Type":"ContainerStarted","Data":"a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0"} Dec 02 11:18:40 crc kubenswrapper[4711]: I1202 11:18:40.662713 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/must-gather-bxxxc" event={"ID":"cabdab0c-1ff8-4185-b7a7-3785a1eef79b","Type":"ContainerStarted","Data":"f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c"} Dec 02 11:18:40 crc kubenswrapper[4711]: I1202 11:18:40.684803 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kr647/must-gather-bxxxc" podStartSLOduration=2.684786763 podStartE2EDuration="2.684786763s" podCreationTimestamp="2025-12-02 11:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:18:40.679477611 +0000 UTC m=+3910.388844068" watchObservedRunningTime="2025-12-02 11:18:40.684786763 +0000 UTC m=+3910.394153210" Dec 02 11:18:42 crc kubenswrapper[4711]: I1202 11:18:42.079780 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:18:42 crc 
kubenswrapper[4711]: E1202 11:18:42.085445 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:18:43 crc kubenswrapper[4711]: I1202 11:18:43.409269 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kr647/crc-debug-4q5hw"] Dec 02 11:18:43 crc kubenswrapper[4711]: I1202 11:18:43.410741 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:18:43 crc kubenswrapper[4711]: I1202 11:18:43.570583 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-host\") pod \"crc-debug-4q5hw\" (UID: \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\") " pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:18:43 crc kubenswrapper[4711]: I1202 11:18:43.570718 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqj5\" (UniqueName: \"kubernetes.io/projected/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-kube-api-access-8bqj5\") pod \"crc-debug-4q5hw\" (UID: \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\") " pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:18:43 crc kubenswrapper[4711]: I1202 11:18:43.673068 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-host\") pod \"crc-debug-4q5hw\" (UID: \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\") " pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:18:43 
crc kubenswrapper[4711]: I1202 11:18:43.673172 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqj5\" (UniqueName: \"kubernetes.io/projected/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-kube-api-access-8bqj5\") pod \"crc-debug-4q5hw\" (UID: \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\") " pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:18:43 crc kubenswrapper[4711]: I1202 11:18:43.673251 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-host\") pod \"crc-debug-4q5hw\" (UID: \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\") " pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:18:43 crc kubenswrapper[4711]: I1202 11:18:43.693717 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bqj5\" (UniqueName: \"kubernetes.io/projected/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-kube-api-access-8bqj5\") pod \"crc-debug-4q5hw\" (UID: \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\") " pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:18:43 crc kubenswrapper[4711]: I1202 11:18:43.730715 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:18:43 crc kubenswrapper[4711]: W1202 11:18:43.767092 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb8178a7_2a07_4aa7_afc0_6ab63e11eb56.slice/crio-270134c305d90f555c4a3d97690c55268110db389ebfb95513222483de2705d7 WatchSource:0}: Error finding container 270134c305d90f555c4a3d97690c55268110db389ebfb95513222483de2705d7: Status 404 returned error can't find the container with id 270134c305d90f555c4a3d97690c55268110db389ebfb95513222483de2705d7 Dec 02 11:18:44 crc kubenswrapper[4711]: I1202 11:18:44.702244 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/crc-debug-4q5hw" event={"ID":"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56","Type":"ContainerStarted","Data":"4f001e835c1b61f480a53380da1aa11173a3759b2a010432fcd52129746d8f07"} Dec 02 11:18:44 crc kubenswrapper[4711]: I1202 11:18:44.702724 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/crc-debug-4q5hw" event={"ID":"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56","Type":"ContainerStarted","Data":"270134c305d90f555c4a3d97690c55268110db389ebfb95513222483de2705d7"} Dec 02 11:18:44 crc kubenswrapper[4711]: I1202 11:18:44.716583 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kr647/crc-debug-4q5hw" podStartSLOduration=1.716566397 podStartE2EDuration="1.716566397s" podCreationTimestamp="2025-12-02 11:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:18:44.714651406 +0000 UTC m=+3914.424017853" watchObservedRunningTime="2025-12-02 11:18:44.716566397 +0000 UTC m=+3914.425932844" Dec 02 11:18:54 crc kubenswrapper[4711]: I1202 11:18:54.078910 4711 scope.go:117] "RemoveContainer" 
containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:18:54 crc kubenswrapper[4711]: E1202 11:18:54.079641 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:19:07 crc kubenswrapper[4711]: I1202 11:19:07.078716 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:19:07 crc kubenswrapper[4711]: E1202 11:19:07.079734 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:19:16 crc kubenswrapper[4711]: I1202 11:19:16.016886 4711 generic.go:334] "Generic (PLEG): container finished" podID="cb8178a7-2a07-4aa7-afc0-6ab63e11eb56" containerID="4f001e835c1b61f480a53380da1aa11173a3759b2a010432fcd52129746d8f07" exitCode=0 Dec 02 11:19:16 crc kubenswrapper[4711]: I1202 11:19:16.017010 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/crc-debug-4q5hw" event={"ID":"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56","Type":"ContainerDied","Data":"4f001e835c1b61f480a53380da1aa11173a3759b2a010432fcd52129746d8f07"} Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.163293 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.198279 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kr647/crc-debug-4q5hw"] Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.211624 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kr647/crc-debug-4q5hw"] Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.273326 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bqj5\" (UniqueName: \"kubernetes.io/projected/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-kube-api-access-8bqj5\") pod \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\" (UID: \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\") " Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.273514 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-host\") pod \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\" (UID: \"cb8178a7-2a07-4aa7-afc0-6ab63e11eb56\") " Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.273609 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-host" (OuterVolumeSpecName: "host") pod "cb8178a7-2a07-4aa7-afc0-6ab63e11eb56" (UID: "cb8178a7-2a07-4aa7-afc0-6ab63e11eb56"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.274262 4711 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-host\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.280265 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-kube-api-access-8bqj5" (OuterVolumeSpecName: "kube-api-access-8bqj5") pod "cb8178a7-2a07-4aa7-afc0-6ab63e11eb56" (UID: "cb8178a7-2a07-4aa7-afc0-6ab63e11eb56"). InnerVolumeSpecName "kube-api-access-8bqj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:19:17 crc kubenswrapper[4711]: I1202 11:19:17.375395 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bqj5\" (UniqueName: \"kubernetes.io/projected/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56-kube-api-access-8bqj5\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.048702 4711 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="270134c305d90f555c4a3d97690c55268110db389ebfb95513222483de2705d7" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.048751 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-4q5hw" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.367994 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kr647/crc-debug-blkz2"] Dec 02 11:19:18 crc kubenswrapper[4711]: E1202 11:19:18.368452 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8178a7-2a07-4aa7-afc0-6ab63e11eb56" containerName="container-00" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.368472 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8178a7-2a07-4aa7-afc0-6ab63e11eb56" containerName="container-00" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.368672 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8178a7-2a07-4aa7-afc0-6ab63e11eb56" containerName="container-00" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.369387 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.391615 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pt4n\" (UniqueName: \"kubernetes.io/projected/bae2a98a-d17a-4b34-9da8-339390e2efab-kube-api-access-8pt4n\") pod \"crc-debug-blkz2\" (UID: \"bae2a98a-d17a-4b34-9da8-339390e2efab\") " pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.391654 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae2a98a-d17a-4b34-9da8-339390e2efab-host\") pod \"crc-debug-blkz2\" (UID: \"bae2a98a-d17a-4b34-9da8-339390e2efab\") " pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.493770 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pt4n\" (UniqueName: 
\"kubernetes.io/projected/bae2a98a-d17a-4b34-9da8-339390e2efab-kube-api-access-8pt4n\") pod \"crc-debug-blkz2\" (UID: \"bae2a98a-d17a-4b34-9da8-339390e2efab\") " pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.493909 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae2a98a-d17a-4b34-9da8-339390e2efab-host\") pod \"crc-debug-blkz2\" (UID: \"bae2a98a-d17a-4b34-9da8-339390e2efab\") " pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.494081 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae2a98a-d17a-4b34-9da8-339390e2efab-host\") pod \"crc-debug-blkz2\" (UID: \"bae2a98a-d17a-4b34-9da8-339390e2efab\") " pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.514599 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pt4n\" (UniqueName: \"kubernetes.io/projected/bae2a98a-d17a-4b34-9da8-339390e2efab-kube-api-access-8pt4n\") pod \"crc-debug-blkz2\" (UID: \"bae2a98a-d17a-4b34-9da8-339390e2efab\") " pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:18 crc kubenswrapper[4711]: I1202 11:19:18.693146 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:19 crc kubenswrapper[4711]: I1202 11:19:19.058153 4711 generic.go:334] "Generic (PLEG): container finished" podID="bae2a98a-d17a-4b34-9da8-339390e2efab" containerID="1a0834d57f7c98d90a8e8cb1f367e6d6b1ed142aca4f7248fe28c28d1e6bf358" exitCode=0 Dec 02 11:19:19 crc kubenswrapper[4711]: I1202 11:19:19.058220 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/crc-debug-blkz2" event={"ID":"bae2a98a-d17a-4b34-9da8-339390e2efab","Type":"ContainerDied","Data":"1a0834d57f7c98d90a8e8cb1f367e6d6b1ed142aca4f7248fe28c28d1e6bf358"} Dec 02 11:19:19 crc kubenswrapper[4711]: I1202 11:19:19.058513 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/crc-debug-blkz2" event={"ID":"bae2a98a-d17a-4b34-9da8-339390e2efab","Type":"ContainerStarted","Data":"f16bcfd96dedb7764f8fa6158a13e37fd30641173dbdf04b4f3bda8148351811"} Dec 02 11:19:19 crc kubenswrapper[4711]: I1202 11:19:19.078112 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:19:19 crc kubenswrapper[4711]: E1202 11:19:19.078426 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" Dec 02 11:19:19 crc kubenswrapper[4711]: I1202 11:19:19.090320 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8178a7-2a07-4aa7-afc0-6ab63e11eb56" path="/var/lib/kubelet/pods/cb8178a7-2a07-4aa7-afc0-6ab63e11eb56/volumes" Dec 02 11:19:19 crc kubenswrapper[4711]: I1202 11:19:19.493713 4711 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-kr647/crc-debug-blkz2"] Dec 02 11:19:19 crc kubenswrapper[4711]: I1202 11:19:19.502991 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kr647/crc-debug-blkz2"] Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.181910 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.222056 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae2a98a-d17a-4b34-9da8-339390e2efab-host\") pod \"bae2a98a-d17a-4b34-9da8-339390e2efab\" (UID: \"bae2a98a-d17a-4b34-9da8-339390e2efab\") " Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.222186 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bae2a98a-d17a-4b34-9da8-339390e2efab-host" (OuterVolumeSpecName: "host") pod "bae2a98a-d17a-4b34-9da8-339390e2efab" (UID: "bae2a98a-d17a-4b34-9da8-339390e2efab"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.222365 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pt4n\" (UniqueName: \"kubernetes.io/projected/bae2a98a-d17a-4b34-9da8-339390e2efab-kube-api-access-8pt4n\") pod \"bae2a98a-d17a-4b34-9da8-339390e2efab\" (UID: \"bae2a98a-d17a-4b34-9da8-339390e2efab\") " Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.222900 4711 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae2a98a-d17a-4b34-9da8-339390e2efab-host\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.237324 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae2a98a-d17a-4b34-9da8-339390e2efab-kube-api-access-8pt4n" (OuterVolumeSpecName: "kube-api-access-8pt4n") pod "bae2a98a-d17a-4b34-9da8-339390e2efab" (UID: "bae2a98a-d17a-4b34-9da8-339390e2efab"). InnerVolumeSpecName "kube-api-access-8pt4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.324263 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pt4n\" (UniqueName: \"kubernetes.io/projected/bae2a98a-d17a-4b34-9da8-339390e2efab-kube-api-access-8pt4n\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.735591 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kr647/crc-debug-nmfwv"] Dec 02 11:19:20 crc kubenswrapper[4711]: E1202 11:19:20.736001 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae2a98a-d17a-4b34-9da8-339390e2efab" containerName="container-00" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.736014 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae2a98a-d17a-4b34-9da8-339390e2efab" containerName="container-00" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.736222 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae2a98a-d17a-4b34-9da8-339390e2efab" containerName="container-00" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.736997 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.832737 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2g2\" (UniqueName: \"kubernetes.io/projected/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-kube-api-access-jd2g2\") pod \"crc-debug-nmfwv\" (UID: \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\") " pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.832916 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-host\") pod \"crc-debug-nmfwv\" (UID: \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\") " pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.934068 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-host\") pod \"crc-debug-nmfwv\" (UID: \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\") " pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.934187 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd2g2\" (UniqueName: \"kubernetes.io/projected/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-kube-api-access-jd2g2\") pod \"crc-debug-nmfwv\" (UID: \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\") " pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:20 crc kubenswrapper[4711]: I1202 11:19:20.934262 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-host\") pod \"crc-debug-nmfwv\" (UID: \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\") " pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:20 crc 
kubenswrapper[4711]: I1202 11:19:20.957722 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd2g2\" (UniqueName: \"kubernetes.io/projected/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-kube-api-access-jd2g2\") pod \"crc-debug-nmfwv\" (UID: \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\") " pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:21 crc kubenswrapper[4711]: I1202 11:19:21.054696 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:21 crc kubenswrapper[4711]: I1202 11:19:21.087766 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:21 crc kubenswrapper[4711]: I1202 11:19:21.106449 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae2a98a-d17a-4b34-9da8-339390e2efab" path="/var/lib/kubelet/pods/bae2a98a-d17a-4b34-9da8-339390e2efab/volumes" Dec 02 11:19:21 crc kubenswrapper[4711]: I1202 11:19:21.107605 4711 scope.go:117] "RemoveContainer" containerID="1a0834d57f7c98d90a8e8cb1f367e6d6b1ed142aca4f7248fe28c28d1e6bf358" Dec 02 11:19:22 crc kubenswrapper[4711]: I1202 11:19:22.096622 4711 generic.go:334] "Generic (PLEG): container finished" podID="dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3" containerID="d9a99512ea24808231a90920d43a26332678836a41b184c02bfb8a0a2f20b0f0" exitCode=0 Dec 02 11:19:22 crc kubenswrapper[4711]: I1202 11:19:22.096696 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/crc-debug-nmfwv" event={"ID":"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3","Type":"ContainerDied","Data":"d9a99512ea24808231a90920d43a26332678836a41b184c02bfb8a0a2f20b0f0"} Dec 02 11:19:22 crc kubenswrapper[4711]: I1202 11:19:22.098481 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/crc-debug-nmfwv" 
event={"ID":"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3","Type":"ContainerStarted","Data":"ff9e00952bc532dff24139ea97342275e24590877a3830a79c53216cd1a5742e"} Dec 02 11:19:22 crc kubenswrapper[4711]: I1202 11:19:22.149908 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kr647/crc-debug-nmfwv"] Dec 02 11:19:22 crc kubenswrapper[4711]: I1202 11:19:22.162128 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kr647/crc-debug-nmfwv"] Dec 02 11:19:23 crc kubenswrapper[4711]: I1202 11:19:23.225912 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:23 crc kubenswrapper[4711]: I1202 11:19:23.396232 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd2g2\" (UniqueName: \"kubernetes.io/projected/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-kube-api-access-jd2g2\") pod \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\" (UID: \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\") " Dec 02 11:19:23 crc kubenswrapper[4711]: I1202 11:19:23.397242 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-host\") pod \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\" (UID: \"dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3\") " Dec 02 11:19:23 crc kubenswrapper[4711]: I1202 11:19:23.397345 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-host" (OuterVolumeSpecName: "host") pod "dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3" (UID: "dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:19:23 crc kubenswrapper[4711]: I1202 11:19:23.400114 4711 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-host\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:23 crc kubenswrapper[4711]: I1202 11:19:23.403015 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-kube-api-access-jd2g2" (OuterVolumeSpecName: "kube-api-access-jd2g2") pod "dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3" (UID: "dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3"). InnerVolumeSpecName "kube-api-access-jd2g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:19:23 crc kubenswrapper[4711]: I1202 11:19:23.503662 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd2g2\" (UniqueName: \"kubernetes.io/projected/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3-kube-api-access-jd2g2\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:24 crc kubenswrapper[4711]: I1202 11:19:24.126802 4711 scope.go:117] "RemoveContainer" containerID="d9a99512ea24808231a90920d43a26332678836a41b184c02bfb8a0a2f20b0f0" Dec 02 11:19:24 crc kubenswrapper[4711]: I1202 11:19:24.126873 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-nmfwv" Dec 02 11:19:25 crc kubenswrapper[4711]: I1202 11:19:25.091409 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3" path="/var/lib/kubelet/pods/dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3/volumes" Dec 02 11:19:34 crc kubenswrapper[4711]: I1202 11:19:34.078394 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530" Dec 02 11:19:35 crc kubenswrapper[4711]: I1202 11:19:35.248356 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"291d751448235148339342bfa37f282b1f70919e0f764ee9dfb17ad5ee0636ca"} Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.411148 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lddq7"] Dec 02 11:19:38 crc kubenswrapper[4711]: E1202 11:19:38.412259 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3" containerName="container-00" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.412276 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3" containerName="container-00" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.412526 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca4ddd9-61b5-47e6-9ddb-aa7ff30909e3" containerName="container-00" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.414341 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.443254 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lddq7"] Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.510510 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d8td\" (UniqueName: \"kubernetes.io/projected/b93c36ae-2def-4586-96e5-de52675d79a7-kube-api-access-7d8td\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.510823 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-utilities\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.510930 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-catalog-content\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.612358 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-utilities\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.612416 4711 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-catalog-content\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.612530 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d8td\" (UniqueName: \"kubernetes.io/projected/b93c36ae-2def-4586-96e5-de52675d79a7-kube-api-access-7d8td\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.613814 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-catalog-content\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.613833 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-utilities\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.637074 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d8td\" (UniqueName: \"kubernetes.io/projected/b93c36ae-2def-4586-96e5-de52675d79a7-kube-api-access-7d8td\") pod \"redhat-marketplace-lddq7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:38 crc kubenswrapper[4711]: I1202 11:19:38.744094 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:39 crc kubenswrapper[4711]: I1202 11:19:39.283543 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lddq7"] Dec 02 11:19:40 crc kubenswrapper[4711]: I1202 11:19:40.292407 4711 generic.go:334] "Generic (PLEG): container finished" podID="b93c36ae-2def-4586-96e5-de52675d79a7" containerID="9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586" exitCode=0 Dec 02 11:19:40 crc kubenswrapper[4711]: I1202 11:19:40.292522 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lddq7" event={"ID":"b93c36ae-2def-4586-96e5-de52675d79a7","Type":"ContainerDied","Data":"9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586"} Dec 02 11:19:40 crc kubenswrapper[4711]: I1202 11:19:40.292880 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lddq7" event={"ID":"b93c36ae-2def-4586-96e5-de52675d79a7","Type":"ContainerStarted","Data":"5dfafe631f3157f13375b689be3f63f795288b9bf8f86ea7d55af5eb448ab882"} Dec 02 11:19:41 crc kubenswrapper[4711]: I1202 11:19:41.304085 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lddq7" event={"ID":"b93c36ae-2def-4586-96e5-de52675d79a7","Type":"ContainerStarted","Data":"c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd"} Dec 02 11:19:41 crc kubenswrapper[4711]: E1202 11:19:41.914081 4711 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb93c36ae_2def_4586_96e5_de52675d79a7.slice/crio-conmon-c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd.scope\": RecentStats: unable to find data in memory cache]" Dec 02 11:19:42 crc kubenswrapper[4711]: I1202 11:19:42.314941 4711 generic.go:334] "Generic (PLEG): 
container finished" podID="b93c36ae-2def-4586-96e5-de52675d79a7" containerID="c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd" exitCode=0 Dec 02 11:19:42 crc kubenswrapper[4711]: I1202 11:19:42.315067 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lddq7" event={"ID":"b93c36ae-2def-4586-96e5-de52675d79a7","Type":"ContainerDied","Data":"c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd"} Dec 02 11:19:43 crc kubenswrapper[4711]: I1202 11:19:43.326168 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lddq7" event={"ID":"b93c36ae-2def-4586-96e5-de52675d79a7","Type":"ContainerStarted","Data":"c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168"} Dec 02 11:19:43 crc kubenswrapper[4711]: I1202 11:19:43.349431 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lddq7" podStartSLOduration=2.71991088 podStartE2EDuration="5.349382259s" podCreationTimestamp="2025-12-02 11:19:38 +0000 UTC" firstStartedPulling="2025-12-02 11:19:40.295218445 +0000 UTC m=+3970.004584892" lastFinishedPulling="2025-12-02 11:19:42.924689824 +0000 UTC m=+3972.634056271" observedRunningTime="2025-12-02 11:19:43.342074932 +0000 UTC m=+3973.051441379" watchObservedRunningTime="2025-12-02 11:19:43.349382259 +0000 UTC m=+3973.058748706" Dec 02 11:19:48 crc kubenswrapper[4711]: I1202 11:19:48.744400 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:48 crc kubenswrapper[4711]: I1202 11:19:48.745381 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:48 crc kubenswrapper[4711]: I1202 11:19:48.808279 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lddq7" 
Dec 02 11:19:49 crc kubenswrapper[4711]: I1202 11:19:49.436979 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.019883 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fddf747c8-8wktl_429cc017-c93c-4d8a-b5eb-819eb6fde287/barbican-api/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.174056 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fddf747c8-8wktl_429cc017-c93c-4d8a-b5eb-819eb6fde287/barbican-api-log/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.327412 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c5c78fc8b-7bz2t_2cacc030-0a08-4dab-96e4-a024aa16faa6/barbican-keystone-listener-log/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.355329 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c5c78fc8b-7bz2t_2cacc030-0a08-4dab-96e4-a024aa16faa6/barbican-keystone-listener/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.524286 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d4b47d497-gjzqt_c55fa7c4-9945-4651-bf4b-9ad1b94e6047/barbican-worker/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.535674 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d4b47d497-gjzqt_c55fa7c4-9945-4651-bf4b-9ad1b94e6047/barbican-worker-log/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.633546 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ff9fh_4a833e50-6d25-4593-b413-ceb01d516010/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.770019 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_fe8c836c-181d-4c74-8cfc-7e66357bed76/ceilometer-central-agent/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.838496 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe8c836c-181d-4c74-8cfc-7e66357bed76/proxy-httpd/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.857596 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe8c836c-181d-4c74-8cfc-7e66357bed76/ceilometer-notification-agent/0.log" Dec 02 11:19:50 crc kubenswrapper[4711]: I1202 11:19:50.997234 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lddq7"] Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.113413 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d1652776-ac6b-4033-a6b0-e0272ce72b34/cinder-api/0.log" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.160172 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe8c836c-181d-4c74-8cfc-7e66357bed76/sg-core/0.log" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.225542 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d1652776-ac6b-4033-a6b0-e0272ce72b34/cinder-api-log/0.log" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.231039 4711 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podbae2a98a-d17a-4b34-9da8-339390e2efab"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podbae2a98a-d17a-4b34-9da8-339390e2efab] : Timed out while waiting for systemd to remove kubepods-besteffort-podbae2a98a_d17a_4b34_9da8_339390e2efab.slice" Dec 02 11:19:51 crc kubenswrapper[4711]: E1202 11:19:51.231164 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podbae2a98a-d17a-4b34-9da8-339390e2efab] : unable 
to destroy cgroup paths for cgroup [kubepods besteffort podbae2a98a-d17a-4b34-9da8-339390e2efab] : Timed out while waiting for systemd to remove kubepods-besteffort-podbae2a98a_d17a_4b34_9da8_339390e2efab.slice" pod="openshift-must-gather-kr647/crc-debug-blkz2" podUID="bae2a98a-d17a-4b34-9da8-339390e2efab" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.408987 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/crc-debug-blkz2" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.409523 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lddq7" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" containerName="registry-server" containerID="cri-o://c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168" gracePeriod=2 Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.436441 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5a4b1357-91ed-4ef7-85f5-9b52085ce952/cinder-scheduler/0.log" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.452372 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5a4b1357-91ed-4ef7-85f5-9b52085ce952/probe/0.log" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.671979 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bd4lq_c9425d80-55ad-4f08-acd8-4389676e9b71/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.728443 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pg6w5_6cc0043c-689a-4c2f-b70f-a4a3c5344385/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.873320 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-59gqw_e33eabd6-6a5d-4d49-b0db-3d31fcb6f171/init/0.log" Dec 02 11:19:51 crc kubenswrapper[4711]: I1202 11:19:51.994258 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.056779 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-utilities\") pod \"b93c36ae-2def-4586-96e5-de52675d79a7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.056847 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-catalog-content\") pod \"b93c36ae-2def-4586-96e5-de52675d79a7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.058049 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-utilities" (OuterVolumeSpecName: "utilities") pod "b93c36ae-2def-4586-96e5-de52675d79a7" (UID: "b93c36ae-2def-4586-96e5-de52675d79a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.093243 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b93c36ae-2def-4586-96e5-de52675d79a7" (UID: "b93c36ae-2def-4586-96e5-de52675d79a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.096861 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-59gqw_e33eabd6-6a5d-4d49-b0db-3d31fcb6f171/init/0.log" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.133798 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-59gqw_e33eabd6-6a5d-4d49-b0db-3d31fcb6f171/dnsmasq-dns/0.log" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.153978 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8m4qt_1a61e5f0-3651-4c39-aec6-5c6ae688a94c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.158189 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d8td\" (UniqueName: \"kubernetes.io/projected/b93c36ae-2def-4586-96e5-de52675d79a7-kube-api-access-7d8td\") pod \"b93c36ae-2def-4586-96e5-de52675d79a7\" (UID: \"b93c36ae-2def-4586-96e5-de52675d79a7\") " Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.158941 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.159035 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93c36ae-2def-4586-96e5-de52675d79a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.185197 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93c36ae-2def-4586-96e5-de52675d79a7-kube-api-access-7d8td" (OuterVolumeSpecName: "kube-api-access-7d8td") pod "b93c36ae-2def-4586-96e5-de52675d79a7" (UID: 
"b93c36ae-2def-4586-96e5-de52675d79a7"). InnerVolumeSpecName "kube-api-access-7d8td". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.261974 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d8td\" (UniqueName: \"kubernetes.io/projected/b93c36ae-2def-4586-96e5-de52675d79a7-kube-api-access-7d8td\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.358128 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9e688916-64de-415a-86d9-b54a42d3174d/glance-log/0.log" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.382680 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9e688916-64de-415a-86d9-b54a42d3174d/glance-httpd/0.log" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.418675 4711 generic.go:334] "Generic (PLEG): container finished" podID="b93c36ae-2def-4586-96e5-de52675d79a7" containerID="c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168" exitCode=0 Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.418721 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lddq7" event={"ID":"b93c36ae-2def-4586-96e5-de52675d79a7","Type":"ContainerDied","Data":"c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168"} Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.418749 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lddq7" event={"ID":"b93c36ae-2def-4586-96e5-de52675d79a7","Type":"ContainerDied","Data":"5dfafe631f3157f13375b689be3f63f795288b9bf8f86ea7d55af5eb448ab882"} Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.418764 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lddq7" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.418770 4711 scope.go:117] "RemoveContainer" containerID="c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.441335 4711 scope.go:117] "RemoveContainer" containerID="c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.452516 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lddq7"] Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.459856 4711 scope.go:117] "RemoveContainer" containerID="9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.469969 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lddq7"] Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.508005 4711 scope.go:117] "RemoveContainer" containerID="c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168" Dec 02 11:19:52 crc kubenswrapper[4711]: E1202 11:19:52.508398 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168\": container with ID starting with c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168 not found: ID does not exist" containerID="c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.508444 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168"} err="failed to get container status \"c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168\": rpc error: code = NotFound desc = could not find container 
\"c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168\": container with ID starting with c232e18f59f68b159c2fb78e35dc67cafe6b37f173474dcc699f634534da6168 not found: ID does not exist" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.508471 4711 scope.go:117] "RemoveContainer" containerID="c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd" Dec 02 11:19:52 crc kubenswrapper[4711]: E1202 11:19:52.508854 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd\": container with ID starting with c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd not found: ID does not exist" containerID="c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.508884 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd"} err="failed to get container status \"c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd\": rpc error: code = NotFound desc = could not find container \"c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd\": container with ID starting with c62ce0f5b9fb3412ff899a1de6623f60e89a09a063902a3597fa8d237550ecdd not found: ID does not exist" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.508902 4711 scope.go:117] "RemoveContainer" containerID="9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586" Dec 02 11:19:52 crc kubenswrapper[4711]: E1202 11:19:52.510385 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586\": container with ID starting with 9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586 not found: ID does not exist" 
containerID="9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.510418 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586"} err="failed to get container status \"9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586\": rpc error: code = NotFound desc = could not find container \"9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586\": container with ID starting with 9bae408d758c2765ccfe6c14f62a16a0cef39bf346914310ff5a6b2420302586 not found: ID does not exist" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.563371 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c9a0c110-a808-440d-ad76-4c1b193f3543/glance-httpd/0.log" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.663509 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c9a0c110-a808-440d-ad76-4c1b193f3543/glance-log/0.log" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.749656 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6b4d9565bd-5nwjn_a5e4731d-0cea-4530-aba2-86777a8db6cb/horizon/0.log" Dec 02 11:19:52 crc kubenswrapper[4711]: I1202 11:19:52.886727 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nsqbb_d8270f0b-6b4c-4682-bf69-09147b922785/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:53 crc kubenswrapper[4711]: I1202 11:19:53.089425 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" path="/var/lib/kubelet/pods/b93c36ae-2def-4586-96e5-de52675d79a7/volumes" Dec 02 11:19:53 crc kubenswrapper[4711]: I1202 11:19:53.128308 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-6b4d9565bd-5nwjn_a5e4731d-0cea-4530-aba2-86777a8db6cb/horizon-log/0.log" Dec 02 11:19:53 crc kubenswrapper[4711]: I1202 11:19:53.149763 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-w4jpz_17177c8c-c071-4484-b8e6-2b3c49e8a3e4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:53 crc kubenswrapper[4711]: I1202 11:19:53.372192 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29411221-c27ql_37b4f06f-7175-4bee-85ee-970775ae49a8/keystone-cron/0.log" Dec 02 11:19:53 crc kubenswrapper[4711]: I1202 11:19:53.399508 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6986b467dd-l4plx_805add98-0168-44c8-a35c-dfdd1709a8ae/keystone-api/0.log" Dec 02 11:19:53 crc kubenswrapper[4711]: I1202 11:19:53.615921 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6abaf105-a517-42d9-86c4-5e6cd5527b94/kube-state-metrics/0.log" Dec 02 11:19:53 crc kubenswrapper[4711]: I1202 11:19:53.663021 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-czk8n_48489c70-bfb2-4dbf-b002-1dcdb3da737f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:53 crc kubenswrapper[4711]: I1202 11:19:53.943808 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b445d9db9-64xt2_886c7d1f-5204-436e-a656-68b1ac98b586/neutron-httpd/0.log" Dec 02 11:19:54 crc kubenswrapper[4711]: I1202 11:19:54.004432 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b445d9db9-64xt2_886c7d1f-5204-436e-a656-68b1ac98b586/neutron-api/0.log" Dec 02 11:19:54 crc kubenswrapper[4711]: I1202 11:19:54.129973 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddtlc_18808e54-ca3d-47a8-ae93-d05737319878/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:54 crc kubenswrapper[4711]: I1202 11:19:54.534579 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d6a280ba-4feb-4ffd-8452-a4e7d2c6512b/nova-api-log/0.log" Dec 02 11:19:54 crc kubenswrapper[4711]: I1202 11:19:54.665410 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d878b444-84f7-4c21-b377-91c45878b703/nova-cell0-conductor-conductor/0.log" Dec 02 11:19:54 crc kubenswrapper[4711]: I1202 11:19:54.877107 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d6a280ba-4feb-4ffd-8452-a4e7d2c6512b/nova-api-api/0.log" Dec 02 11:19:55 crc kubenswrapper[4711]: I1202 11:19:55.058818 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8336264b-6d1c-4a37-b329-743ef0e63e48/nova-cell1-novncproxy-novncproxy/0.log" Dec 02 11:19:55 crc kubenswrapper[4711]: I1202 11:19:55.074448 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e9e2b3f8-fb3d-4e53-ad40-f607f87ca8a2/nova-cell1-conductor-conductor/0.log" Dec 02 11:19:55 crc kubenswrapper[4711]: I1202 11:19:55.161321 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-96fdh_45d45e5b-27e6-42bf-863d-e04caf847040/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:55 crc kubenswrapper[4711]: I1202 11:19:55.340935 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c513475d-590a-4821-9ee5-894e9faaef88/nova-metadata-log/0.log" Dec 02 11:19:55 crc kubenswrapper[4711]: I1202 11:19:55.722512 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_426aabb1-9d66-4797-8fd2-3ecf4074192e/nova-scheduler-scheduler/0.log" Dec 02 11:19:55 crc kubenswrapper[4711]: I1202 11:19:55.723641 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1de720bf-9fe1-40cb-888c-1868fbc89f63/mysql-bootstrap/0.log" Dec 02 11:19:55 crc kubenswrapper[4711]: I1202 11:19:55.831006 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1de720bf-9fe1-40cb-888c-1868fbc89f63/mysql-bootstrap/0.log" Dec 02 11:19:55 crc kubenswrapper[4711]: I1202 11:19:55.934308 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1de720bf-9fe1-40cb-888c-1868fbc89f63/galera/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.021630 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_12dcc0fa-368d-4a71-99ee-fe27e2cd410a/mysql-bootstrap/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.254989 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_12dcc0fa-368d-4a71-99ee-fe27e2cd410a/mysql-bootstrap/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.326216 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_12dcc0fa-368d-4a71-99ee-fe27e2cd410a/galera/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.462340 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6da5f746-13e6-4933-8b49-ad17165cfcf0/openstackclient/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.565831 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7vpwc_4a2d3ff8-c766-478e-9fae-105cd7432c09/openstack-network-exporter/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.649505 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_c513475d-590a-4821-9ee5-894e9faaef88/nova-metadata-metadata/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.770360 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lxtbd_82b00f57-beb4-43ad-a1c5-cc9790bb167e/ovsdb-server-init/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.986334 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lxtbd_82b00f57-beb4-43ad-a1c5-cc9790bb167e/ovsdb-server-init/0.log" Dec 02 11:19:56 crc kubenswrapper[4711]: I1202 11:19:56.995665 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lxtbd_82b00f57-beb4-43ad-a1c5-cc9790bb167e/ovs-vswitchd/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.030751 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lxtbd_82b00f57-beb4-43ad-a1c5-cc9790bb167e/ovsdb-server/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.170031 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q57lb_7ce53b33-b78a-446d-b345-c8d918209ddf/ovn-controller/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.313717 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9hrn7_e572720a-5f65-485f-ad5b-76d5f7a782ac/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.389453 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_19183ef0-1a98-4d60-96c3-2b15fd8bd2e8/openstack-network-exporter/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.436968 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_19183ef0-1a98-4d60-96c3-2b15fd8bd2e8/ovn-northd/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.564678 4711 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7fff7494-ee8a-4c45-87de-00444f64be54/openstack-network-exporter/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.691165 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7fff7494-ee8a-4c45-87de-00444f64be54/ovsdbserver-nb/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.757223 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_99032f62-533c-4fa2-887c-41a25a505906/openstack-network-exporter/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.829994 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_99032f62-533c-4fa2-887c-41a25a505906/ovsdbserver-sb/0.log" Dec 02 11:19:57 crc kubenswrapper[4711]: I1202 11:19:57.981850 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6545f6547b-92nrg_34890e20-861e-4023-8029-aff08285be51/placement-api/0.log" Dec 02 11:19:58 crc kubenswrapper[4711]: I1202 11:19:58.037220 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6545f6547b-92nrg_34890e20-861e-4023-8029-aff08285be51/placement-log/0.log" Dec 02 11:19:58 crc kubenswrapper[4711]: I1202 11:19:58.142477 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b642dce9-6793-46ab-9d8a-061c21e965ce/setup-container/0.log" Dec 02 11:19:58 crc kubenswrapper[4711]: I1202 11:19:58.394233 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b642dce9-6793-46ab-9d8a-061c21e965ce/rabbitmq/0.log" Dec 02 11:19:58 crc kubenswrapper[4711]: I1202 11:19:58.396076 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b642dce9-6793-46ab-9d8a-061c21e965ce/setup-container/0.log" Dec 02 11:19:58 crc kubenswrapper[4711]: I1202 11:19:58.450842 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_9b7fae8d-6b42-4c76-b0a0-74004c2e5e47/setup-container/0.log" Dec 02 11:19:58 crc kubenswrapper[4711]: I1202 11:19:58.781788 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7zrtz_5ff31470-e780-4e6a-850a-6cada5050225/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:58 crc kubenswrapper[4711]: I1202 11:19:58.828496 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9b7fae8d-6b42-4c76-b0a0-74004c2e5e47/rabbitmq/0.log" Dec 02 11:19:58 crc kubenswrapper[4711]: I1202 11:19:58.882221 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9b7fae8d-6b42-4c76-b0a0-74004c2e5e47/setup-container/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.073721 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d6hwm_c9824b88-0553-466a-9c0d-07ab1949543a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.169520 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-q5txj_2712309c-6014-4332-86b8-d42b5021b6c0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.248014 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-frrkh_ec850345-39cb-45c3-881d-aa6f59cf2c7a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.381341 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-t4djr_4a03a125-6a0b-4e81-8df8-48e0085fa9a1/ssh-known-hosts-edpm-deployment/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.514292 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-8455cffcc7-gvzs8_4bb0ebbe-23dd-4970-bc78-799616ef2e21/proxy-server/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.620470 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8455cffcc7-gvzs8_4bb0ebbe-23dd-4970-bc78-799616ef2e21/proxy-httpd/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.629924 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-58jtp_848cb525-39ab-47d7-99fc-9fbc249e740a/swift-ring-rebalance/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.843723 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/account-reaper/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.843785 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/account-auditor/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.907159 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/account-replicator/0.log" Dec 02 11:19:59 crc kubenswrapper[4711]: I1202 11:19:59.985808 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/account-server/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.097181 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/container-auditor/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.110200 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/container-server/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.110452 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/container-replicator/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.199334 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/container-updater/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.346891 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-expirer/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.379755 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-auditor/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.383488 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-replicator/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.417616 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-server/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.537729 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/rsync/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.636348 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/swift-recon-cron/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.639147 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_23030cd9-0bb2-4574-8c49-405bef4719b5/object-updater/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.775981 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-zwsx5_5192ee19-472c-4f7c-b41d-4a11b518b045/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:20:00 crc kubenswrapper[4711]: I1202 11:20:00.902801 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_725581bd-6264-4ca6-b1fa-126c3c50800b/tempest-tests-tempest-tests-runner/0.log" Dec 02 11:20:01 crc kubenswrapper[4711]: I1202 11:20:01.002659 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_76f6fd71-c403-415b-9402-f3fcd9ab0fd4/test-operator-logs-container/0.log" Dec 02 11:20:01 crc kubenswrapper[4711]: I1202 11:20:01.107172 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-55wpk_c44d97e0-717c-4337-910f-68b93cc653a7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:20:09 crc kubenswrapper[4711]: I1202 11:20:09.342253 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f2ad898b-6abc-49b9-8f12-5e2da28b6479/memcached/0.log" Dec 02 11:20:27 crc kubenswrapper[4711]: I1202 11:20:27.402512 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/util/0.log" Dec 02 11:20:27 crc kubenswrapper[4711]: I1202 11:20:27.628498 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/pull/0.log" Dec 02 11:20:27 crc kubenswrapper[4711]: I1202 11:20:27.654134 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/util/0.log" Dec 02 11:20:27 crc kubenswrapper[4711]: I1202 
11:20:27.678791 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/pull/0.log" Dec 02 11:20:27 crc kubenswrapper[4711]: I1202 11:20:27.807774 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/util/0.log" Dec 02 11:20:27 crc kubenswrapper[4711]: I1202 11:20:27.823933 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/pull/0.log" Dec 02 11:20:27 crc kubenswrapper[4711]: I1202 11:20:27.847167 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_09f6aabc490a484675d73a06c24b9a231a62e55f3a8db45f214d1799b87jvfz_01f223a8-15fc-4798-9fed-4f1624424d95/extract/0.log" Dec 02 11:20:27 crc kubenswrapper[4711]: I1202 11:20:27.996322 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-zsv2n_10c23c28-0e51-465d-ba7c-1becd6a7b5ee/kube-rbac-proxy/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.074673 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-zsv2n_10c23c28-0e51-465d-ba7c-1becd6a7b5ee/manager/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.119712 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-n9x57_d5039117-0162-4158-b6f7-a3dedff319fb/kube-rbac-proxy/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.213562 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-n9x57_d5039117-0162-4158-b6f7-a3dedff319fb/manager/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.291308 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-ktj75_26eb7b16-7210-459f-baac-e740acdb363e/kube-rbac-proxy/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.354921 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-ktj75_26eb7b16-7210-459f-baac-e740acdb363e/manager/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.476828 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-pbvd2_a1984464-0dac-491f-a2f7-bc1f9214fef8/kube-rbac-proxy/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.591339 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-pbvd2_a1984464-0dac-491f-a2f7-bc1f9214fef8/manager/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.763433 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v98q2_8284f010-fa2e-45fd-aa0f-46958a91102b/kube-rbac-proxy/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.826729 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v98q2_8284f010-fa2e-45fd-aa0f-46958a91102b/manager/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.861982 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-bqhpj_59853ec3-31ef-402d-8f5f-c12528b688f0/kube-rbac-proxy/0.log" Dec 02 11:20:28 crc kubenswrapper[4711]: I1202 11:20:28.965718 
4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-bqhpj_59853ec3-31ef-402d-8f5f-c12528b688f0/manager/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.054169 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tnkm7_90b53574-c0b1-4bc6-ba22-238abb3c5b32/kube-rbac-proxy/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.251896 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-cwr6k_102348ad-5257-4114-acd6-e0e6c60a3c2b/kube-rbac-proxy/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.318276 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tnkm7_90b53574-c0b1-4bc6-ba22-238abb3c5b32/manager/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.319266 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-cwr6k_102348ad-5257-4114-acd6-e0e6c60a3c2b/manager/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.438941 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-xkq8d_0b12ad88-acba-4d9f-82ac-f59d3ca57ac8/kube-rbac-proxy/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.541009 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-xkq8d_0b12ad88-acba-4d9f-82ac-f59d3ca57ac8/manager/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.663520 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-5nkxr_9d8cab18-532c-45c8-ba21-6f3bee02c722/kube-rbac-proxy/0.log" Dec 02 11:20:29 crc 
kubenswrapper[4711]: I1202 11:20:29.668624 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-5nkxr_9d8cab18-532c-45c8-ba21-6f3bee02c722/manager/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.732776 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-lsgwm_7e3c4c79-5009-40f8-80f9-0d30bf57cc5a/kube-rbac-proxy/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.944762 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-lsgwm_7e3c4c79-5009-40f8-80f9-0d30bf57cc5a/manager/0.log" Dec 02 11:20:29 crc kubenswrapper[4711]: I1202 11:20:29.953887 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-99nd2_1f5bd4c4-1262-47a2-94fb-bce66ebe7929/kube-rbac-proxy/0.log" Dec 02 11:20:30 crc kubenswrapper[4711]: I1202 11:20:30.012853 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-99nd2_1f5bd4c4-1262-47a2-94fb-bce66ebe7929/manager/0.log" Dec 02 11:20:30 crc kubenswrapper[4711]: I1202 11:20:30.122361 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wdt5k_68951454-b246-49ab-b604-a62c48e0b2ea/kube-rbac-proxy/0.log" Dec 02 11:20:30 crc kubenswrapper[4711]: I1202 11:20:30.206824 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wdt5k_68951454-b246-49ab-b604-a62c48e0b2ea/manager/0.log" Dec 02 11:20:30 crc kubenswrapper[4711]: I1202 11:20:30.293131 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-24pj6_f4109dad-388a-493d-b026-6cd10b9f76dd/manager/0.log" Dec 02 11:20:30 crc kubenswrapper[4711]: I1202 11:20:30.339119 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-24pj6_f4109dad-388a-493d-b026-6cd10b9f76dd/kube-rbac-proxy/0.log" Dec 02 11:20:30 crc kubenswrapper[4711]: I1202 11:20:30.425925 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs_6ef2b37f-78be-4a19-9d1b-b7d982032aab/kube-rbac-proxy/0.log" Dec 02 11:20:30 crc kubenswrapper[4711]: I1202 11:20:30.549220 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd42z4xs_6ef2b37f-78be-4a19-9d1b-b7d982032aab/manager/0.log" Dec 02 11:20:30 crc kubenswrapper[4711]: I1202 11:20:30.987769 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7dwdr_1d175158-9d08-4b61-87b8-c9054e78d6aa/registry-server/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.006797 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-d77745b8c-lf4vw_f105bf88-39ba-4e14-8741-1c3a0d759f63/operator/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.221252 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-l7s28_0872909b-ee36-482c-a6d7-f6d7ee6cc5ff/kube-rbac-proxy/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.247452 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-mhr7r_03d9d400-b25b-4ac4-bad3-55afbae399e4/kube-rbac-proxy/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 
11:20:31.311123 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-l7s28_0872909b-ee36-482c-a6d7-f6d7ee6cc5ff/manager/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.481933 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-mhr7r_03d9d400-b25b-4ac4-bad3-55afbae399e4/manager/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.547249 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-k8d2c_2c9ae2aa-9390-409b-b50f-61295577580a/operator/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.657684 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-xdfmz_7f7481f9-19ef-4b29-95ef-043c7306f5cc/kube-rbac-proxy/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.679361 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-xdfmz_7f7481f9-19ef-4b29-95ef-043c7306f5cc/manager/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.743151 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-999cf8558-p99s8_4551cf35-cc78-43c0-a468-2e6518e336ff/kube-rbac-proxy/0.log" Dec 02 11:20:31 crc kubenswrapper[4711]: I1202 11:20:31.912141 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-999cf8558-p99s8_4551cf35-cc78-43c0-a468-2e6518e336ff/manager/0.log" Dec 02 11:20:32 crc kubenswrapper[4711]: I1202 11:20:32.026525 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8f7469895-dzfgg_15eaa14e-a3cd-4e68-8531-741ae62b9d58/manager/0.log" Dec 02 11:20:32 crc 
kubenswrapper[4711]: I1202 11:20:32.160131 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fj96f_01685621-3c95-4091-a03a-de8d25c67efd/manager/0.log" Dec 02 11:20:32 crc kubenswrapper[4711]: I1202 11:20:32.168016 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-fj96f_01685621-3c95-4091-a03a-de8d25c67efd/kube-rbac-proxy/0.log" Dec 02 11:20:32 crc kubenswrapper[4711]: I1202 11:20:32.230609 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-7hlb8_13bbf4f3-8a73-45b8-80f5-52907db710c0/kube-rbac-proxy/0.log" Dec 02 11:20:32 crc kubenswrapper[4711]: I1202 11:20:32.325432 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-7hlb8_13bbf4f3-8a73-45b8-80f5-52907db710c0/manager/0.log" Dec 02 11:20:55 crc kubenswrapper[4711]: I1202 11:20:55.440856 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7kmjp_f6b3df69-76ea-4424-8d50-b2646cf2cd0e/control-plane-machine-set-operator/0.log" Dec 02 11:20:55 crc kubenswrapper[4711]: I1202 11:20:55.624046 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5n6jf_4d5f3363-25e9-4f5b-94ed-843a17d17997/kube-rbac-proxy/0.log" Dec 02 11:20:55 crc kubenswrapper[4711]: I1202 11:20:55.625137 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5n6jf_4d5f3363-25e9-4f5b-94ed-843a17d17997/machine-api-operator/0.log" Dec 02 11:21:10 crc kubenswrapper[4711]: I1202 11:21:10.628614 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-rvx4g_1deeebfb-423b-4a73-a76a-da43ae5dd8a9/cert-manager-controller/0.log" Dec 02 11:21:10 crc kubenswrapper[4711]: I1202 11:21:10.761988 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-mht85_9f001212-3824-41ea-a836-63c46277f629/cert-manager-cainjector/0.log" Dec 02 11:21:10 crc kubenswrapper[4711]: I1202 11:21:10.844901 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-h2pxz_beca3d6d-1017-4654-a1e1-6539558badf4/cert-manager-webhook/0.log" Dec 02 11:21:24 crc kubenswrapper[4711]: I1202 11:21:24.071740 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-b6vc9_4f84ccb3-491d-4453-aaab-89e33441a3e5/nmstate-console-plugin/0.log" Dec 02 11:21:24 crc kubenswrapper[4711]: I1202 11:21:24.297988 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-6ndb5_e515d3a0-2428-4629-833b-f23af0d11b10/nmstate-handler/0.log" Dec 02 11:21:24 crc kubenswrapper[4711]: I1202 11:21:24.325479 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ddk76_2fa89fd1-9149-414e-8214-c1bbb1563330/kube-rbac-proxy/0.log" Dec 02 11:21:24 crc kubenswrapper[4711]: I1202 11:21:24.421207 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ddk76_2fa89fd1-9149-414e-8214-c1bbb1563330/nmstate-metrics/0.log" Dec 02 11:21:24 crc kubenswrapper[4711]: I1202 11:21:24.632969 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-qz4ft_5b290675-5c06-445c-a50b-34ac2ba80718/nmstate-webhook/0.log" Dec 02 11:21:24 crc kubenswrapper[4711]: I1202 11:21:24.667838 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-t599q_edf95574-1178-4d62-b5b2-7dd68fce39da/nmstate-operator/0.log" Dec 02 11:21:41 crc kubenswrapper[4711]: I1202 11:21:41.554250 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-z96st_50c9eab8-843d-46f0-8af8-85bedeb5c0e9/kube-rbac-proxy/0.log" Dec 02 11:21:41 crc kubenswrapper[4711]: I1202 11:21:41.618003 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-z96st_50c9eab8-843d-46f0-8af8-85bedeb5c0e9/controller/0.log" Dec 02 11:21:41 crc kubenswrapper[4711]: I1202 11:21:41.753965 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-frr-files/0.log" Dec 02 11:21:41 crc kubenswrapper[4711]: I1202 11:21:41.947235 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-frr-files/0.log" Dec 02 11:21:41 crc kubenswrapper[4711]: I1202 11:21:41.983997 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-reloader/0.log" Dec 02 11:21:41 crc kubenswrapper[4711]: I1202 11:21:41.992030 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-reloader/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.024935 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-metrics/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.472532 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-frr-files/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.550511 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-metrics/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.550911 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-metrics/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.558413 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-reloader/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.765926 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-reloader/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.768785 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-frr-files/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.795635 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/controller/0.log" Dec 02 11:21:42 crc kubenswrapper[4711]: I1202 11:21:42.865764 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/cp-metrics/0.log" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.009900 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/frr-metrics/0.log" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.010279 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/kube-rbac-proxy/0.log" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.044976 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l7cbs"] Dec 02 11:21:43 crc 
kubenswrapper[4711]: E1202 11:21:43.050027 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" containerName="extract-utilities" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.050072 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" containerName="extract-utilities" Dec 02 11:21:43 crc kubenswrapper[4711]: E1202 11:21:43.050097 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" containerName="extract-content" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.050105 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" containerName="extract-content" Dec 02 11:21:43 crc kubenswrapper[4711]: E1202 11:21:43.050132 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" containerName="registry-server" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.050140 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" containerName="registry-server" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.050401 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93c36ae-2def-4586-96e5-de52675d79a7" containerName="registry-server" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.052028 4711 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.064989 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7cbs"] Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.121229 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/kube-rbac-proxy-frr/0.log" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.144753 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-utilities\") pod \"certified-operators-l7cbs\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.144830 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5m2h\" (UniqueName: \"kubernetes.io/projected/e6b13bc6-46b2-431d-8ebe-5b998914ba57-kube-api-access-p5m2h\") pod \"certified-operators-l7cbs\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.145214 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-catalog-content\") pod \"certified-operators-l7cbs\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.247042 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-catalog-content\") pod 
\"certified-operators-l7cbs\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.247165 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-utilities\") pod \"certified-operators-l7cbs\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.247204 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5m2h\" (UniqueName: \"kubernetes.io/projected/e6b13bc6-46b2-431d-8ebe-5b998914ba57-kube-api-access-p5m2h\") pod \"certified-operators-l7cbs\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.247600 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-catalog-content\") pod \"certified-operators-l7cbs\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.247702 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-utilities\") pod \"certified-operators-l7cbs\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.297414 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5m2h\" (UniqueName: \"kubernetes.io/projected/e6b13bc6-46b2-431d-8ebe-5b998914ba57-kube-api-access-p5m2h\") pod \"certified-operators-l7cbs\" (UID: 
\"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.348431 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/reloader/0.log" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.365668 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-t2846_08426bfc-3b12-4f6a-af6e-83b3bb4bf5a0/frr-k8s-webhook-server/0.log" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.372597 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:43 crc kubenswrapper[4711]: I1202 11:21:43.919992 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c8d4f74b4-pmd42_b61b9ed6-c590-43f9-b029-82f457a65986/manager/0.log" Dec 02 11:21:44 crc kubenswrapper[4711]: I1202 11:21:44.040570 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7cbs"] Dec 02 11:21:44 crc kubenswrapper[4711]: I1202 11:21:44.248744 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gxh9q_e6a8173c-9360-4880-98f8-c314de0da129/frr/0.log" Dec 02 11:21:44 crc kubenswrapper[4711]: I1202 11:21:44.431511 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-575f6f4cd9-2pkhc_02ce4661-516f-4d17-b5b8-69958d4c4ee8/webhook-server/0.log" Dec 02 11:21:44 crc kubenswrapper[4711]: I1202 11:21:44.440294 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fjkgx_d6e279b9-33a8-48d9-9442-be75926b530c/kube-rbac-proxy/0.log" Dec 02 11:21:44 crc kubenswrapper[4711]: I1202 11:21:44.542670 4711 generic.go:334] "Generic (PLEG): container finished" 
podID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerID="c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9" exitCode=0 Dec 02 11:21:44 crc kubenswrapper[4711]: I1202 11:21:44.542734 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7cbs" event={"ID":"e6b13bc6-46b2-431d-8ebe-5b998914ba57","Type":"ContainerDied","Data":"c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9"} Dec 02 11:21:44 crc kubenswrapper[4711]: I1202 11:21:44.543001 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7cbs" event={"ID":"e6b13bc6-46b2-431d-8ebe-5b998914ba57","Type":"ContainerStarted","Data":"446cf02789d415ff2cb6209b06489ca569c4b778656807a4509bdcaf22bb273e"} Dec 02 11:21:44 crc kubenswrapper[4711]: I1202 11:21:44.702528 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fjkgx_d6e279b9-33a8-48d9-9442-be75926b530c/speaker/0.log" Dec 02 11:21:47 crc kubenswrapper[4711]: I1202 11:21:47.572191 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7cbs" event={"ID":"e6b13bc6-46b2-431d-8ebe-5b998914ba57","Type":"ContainerStarted","Data":"3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65"} Dec 02 11:21:48 crc kubenswrapper[4711]: I1202 11:21:48.584873 4711 generic.go:334] "Generic (PLEG): container finished" podID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerID="3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65" exitCode=0 Dec 02 11:21:48 crc kubenswrapper[4711]: I1202 11:21:48.584941 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7cbs" event={"ID":"e6b13bc6-46b2-431d-8ebe-5b998914ba57","Type":"ContainerDied","Data":"3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65"} Dec 02 11:21:48 crc kubenswrapper[4711]: I1202 11:21:48.588473 4711 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:21:50 crc kubenswrapper[4711]: I1202 11:21:50.616204 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7cbs" event={"ID":"e6b13bc6-46b2-431d-8ebe-5b998914ba57","Type":"ContainerStarted","Data":"c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d"} Dec 02 11:21:50 crc kubenswrapper[4711]: I1202 11:21:50.641091 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l7cbs" podStartSLOduration=2.694625072 podStartE2EDuration="7.641057247s" podCreationTimestamp="2025-12-02 11:21:43 +0000 UTC" firstStartedPulling="2025-12-02 11:21:44.544284329 +0000 UTC m=+4094.253650776" lastFinishedPulling="2025-12-02 11:21:49.490716504 +0000 UTC m=+4099.200082951" observedRunningTime="2025-12-02 11:21:50.633588945 +0000 UTC m=+4100.342955402" watchObservedRunningTime="2025-12-02 11:21:50.641057247 +0000 UTC m=+4100.350423704" Dec 02 11:21:52 crc kubenswrapper[4711]: I1202 11:21:52.585907 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:21:52 crc kubenswrapper[4711]: I1202 11:21:52.586348 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:21:53 crc kubenswrapper[4711]: I1202 11:21:53.373763 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:53 crc kubenswrapper[4711]: 
I1202 11:21:53.373840 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:53 crc kubenswrapper[4711]: I1202 11:21:53.861421 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:21:57 crc kubenswrapper[4711]: I1202 11:21:57.786358 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/util/0.log" Dec 02 11:21:57 crc kubenswrapper[4711]: I1202 11:21:57.946254 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/pull/0.log" Dec 02 11:21:57 crc kubenswrapper[4711]: I1202 11:21:57.978453 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/pull/0.log" Dec 02 11:21:57 crc kubenswrapper[4711]: I1202 11:21:57.992765 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/util/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.188138 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/util/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.196501 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/pull/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.196696 4711 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmmxb9_33ec735d-946b-49d3-b1a0-4cb8d263647b/extract/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.369508 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/util/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.546303 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/util/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.574300 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/pull/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.578540 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/pull/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.798979 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/extract/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.835153 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/util/0.log" Dec 02 11:21:58 crc kubenswrapper[4711]: I1202 11:21:58.939606 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83vsgkz_2cda2fc5-6d4e-4770-8e98-2139ee2cee6c/pull/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.022426 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-utilities/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.205397 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-content/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.207267 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-content/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.216596 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-utilities/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.452931 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-utilities/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.479198 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/extract-content/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.733303 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l7cbs_e6b13bc6-46b2-431d-8ebe-5b998914ba57/extract-utilities/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.878445 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-l7cbs_e6b13bc6-46b2-431d-8ebe-5b998914ba57/extract-content/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.936553 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l7cbs_e6b13bc6-46b2-431d-8ebe-5b998914ba57/extract-content/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.959021 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l7cbs_e6b13bc6-46b2-431d-8ebe-5b998914ba57/extract-utilities/0.log" Dec 02 11:21:59 crc kubenswrapper[4711]: I1202 11:21:59.964885 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tn2s_61cb2319-0773-4cab-9057-ea1631ad72b2/registry-server/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.169209 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l7cbs_e6b13bc6-46b2-431d-8ebe-5b998914ba57/registry-server/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.172267 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l7cbs_e6b13bc6-46b2-431d-8ebe-5b998914ba57/extract-utilities/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.220372 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l7cbs_e6b13bc6-46b2-431d-8ebe-5b998914ba57/extract-content/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.367727 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-utilities/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.542127 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-content/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.556790 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-utilities/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.571660 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-content/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.897623 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-utilities/0.log" Dec 02 11:22:00 crc kubenswrapper[4711]: I1202 11:22:00.915607 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/extract-content/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.092494 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-z4fwd_bc27e106-dc06-4326-9cc4-99ca9b5206bb/marketplace-operator/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.212807 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qmdv4_40712fa7-bc4e-4062-b466-f8fc0af28d39/registry-server/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.254914 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-utilities/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.453241 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-utilities/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.459537 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-content/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.475160 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-content/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.667785 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-utilities/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.678160 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/extract-content/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.854511 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-utilities/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.864718 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8w7v_e7d69893-d9ab-42f7-a505-472548cbe19d/registry-server/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.967306 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-content/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.986867 4711 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-utilities/0.log" Dec 02 11:22:01 crc kubenswrapper[4711]: I1202 11:22:01.993507 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-content/0.log" Dec 02 11:22:02 crc kubenswrapper[4711]: I1202 11:22:02.201340 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-utilities/0.log" Dec 02 11:22:02 crc kubenswrapper[4711]: I1202 11:22:02.235555 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/extract-content/0.log" Dec 02 11:22:02 crc kubenswrapper[4711]: I1202 11:22:02.763059 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tln95_48cb272a-41a6-4371-b3c8-fe7d6e661ba2/registry-server/0.log" Dec 02 11:22:04 crc kubenswrapper[4711]: I1202 11:22:04.261566 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:22:04 crc kubenswrapper[4711]: I1202 11:22:04.326554 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7cbs"] Dec 02 11:22:04 crc kubenswrapper[4711]: I1202 11:22:04.735129 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l7cbs" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerName="registry-server" containerID="cri-o://c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d" gracePeriod=2 Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.238846 4711 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.378074 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5m2h\" (UniqueName: \"kubernetes.io/projected/e6b13bc6-46b2-431d-8ebe-5b998914ba57-kube-api-access-p5m2h\") pod \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.378199 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-catalog-content\") pod \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.378228 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-utilities\") pod \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\" (UID: \"e6b13bc6-46b2-431d-8ebe-5b998914ba57\") " Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.379547 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-utilities" (OuterVolumeSpecName: "utilities") pod "e6b13bc6-46b2-431d-8ebe-5b998914ba57" (UID: "e6b13bc6-46b2-431d-8ebe-5b998914ba57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.393141 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b13bc6-46b2-431d-8ebe-5b998914ba57-kube-api-access-p5m2h" (OuterVolumeSpecName: "kube-api-access-p5m2h") pod "e6b13bc6-46b2-431d-8ebe-5b998914ba57" (UID: "e6b13bc6-46b2-431d-8ebe-5b998914ba57"). InnerVolumeSpecName "kube-api-access-p5m2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.429125 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6b13bc6-46b2-431d-8ebe-5b998914ba57" (UID: "e6b13bc6-46b2-431d-8ebe-5b998914ba57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.480018 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5m2h\" (UniqueName: \"kubernetes.io/projected/e6b13bc6-46b2-431d-8ebe-5b998914ba57-kube-api-access-p5m2h\") on node \"crc\" DevicePath \"\"" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.480254 4711 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.480330 4711 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b13bc6-46b2-431d-8ebe-5b998914ba57-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.747479 4711 generic.go:334] "Generic (PLEG): container finished" podID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerID="c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d" exitCode=0 Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.747531 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7cbs" event={"ID":"e6b13bc6-46b2-431d-8ebe-5b998914ba57","Type":"ContainerDied","Data":"c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d"} Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.747572 4711 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-l7cbs" event={"ID":"e6b13bc6-46b2-431d-8ebe-5b998914ba57","Type":"ContainerDied","Data":"446cf02789d415ff2cb6209b06489ca569c4b778656807a4509bdcaf22bb273e"} Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.747581 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7cbs" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.747594 4711 scope.go:117] "RemoveContainer" containerID="c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.785781 4711 scope.go:117] "RemoveContainer" containerID="3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.795478 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7cbs"] Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.804781 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l7cbs"] Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.937178 4711 scope.go:117] "RemoveContainer" containerID="c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.973502 4711 scope.go:117] "RemoveContainer" containerID="c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d" Dec 02 11:22:05 crc kubenswrapper[4711]: E1202 11:22:05.974334 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d\": container with ID starting with c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d not found: ID does not exist" containerID="c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 
11:22:05.974400 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d"} err="failed to get container status \"c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d\": rpc error: code = NotFound desc = could not find container \"c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d\": container with ID starting with c34fea6abf583adf53608577cef978c94997556139e891a670b3aefd59f0634d not found: ID does not exist" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.974445 4711 scope.go:117] "RemoveContainer" containerID="3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65" Dec 02 11:22:05 crc kubenswrapper[4711]: E1202 11:22:05.975447 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65\": container with ID starting with 3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65 not found: ID does not exist" containerID="3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.975528 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65"} err="failed to get container status \"3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65\": rpc error: code = NotFound desc = could not find container \"3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65\": container with ID starting with 3925c641037e4a34be76018591517962b0228571e663ee9fe01cc7cb2d14ae65 not found: ID does not exist" Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.975558 4711 scope.go:117] "RemoveContainer" containerID="c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9" Dec 02 11:22:05 crc 
kubenswrapper[4711]: E1202 11:22:05.976177 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9\": container with ID starting with c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9 not found: ID does not exist" containerID="c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9"
Dec 02 11:22:05 crc kubenswrapper[4711]: I1202 11:22:05.976215 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9"} err="failed to get container status \"c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9\": rpc error: code = NotFound desc = could not find container \"c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9\": container with ID starting with c28ec6231b29b653c9ec3e641962060d09cce0f5ab0d8d40c37ed5ea6186a7f9 not found: ID does not exist"
Dec 02 11:22:07 crc kubenswrapper[4711]: I1202 11:22:07.089521 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" path="/var/lib/kubelet/pods/e6b13bc6-46b2-431d-8ebe-5b998914ba57/volumes"
Dec 02 11:22:22 crc kubenswrapper[4711]: I1202 11:22:22.586152 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 11:22:22 crc kubenswrapper[4711]: I1202 11:22:22.586886 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 11:22:52 crc kubenswrapper[4711]: I1202 11:22:52.586661 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 11:22:52 crc kubenswrapper[4711]: I1202 11:22:52.587488 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 11:22:52 crc kubenswrapper[4711]: I1202 11:22:52.587611 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn"
Dec 02 11:22:52 crc kubenswrapper[4711]: I1202 11:22:52.588784 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"291d751448235148339342bfa37f282b1f70919e0f764ee9dfb17ad5ee0636ca"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 11:22:52 crc kubenswrapper[4711]: I1202 11:22:52.588932 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://291d751448235148339342bfa37f282b1f70919e0f764ee9dfb17ad5ee0636ca" gracePeriod=600
Dec 02 11:22:53 crc kubenswrapper[4711]: I1202 11:22:53.283225 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="291d751448235148339342bfa37f282b1f70919e0f764ee9dfb17ad5ee0636ca" exitCode=0
Dec 02 11:22:53 crc kubenswrapper[4711]: I1202 11:22:53.283288 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"291d751448235148339342bfa37f282b1f70919e0f764ee9dfb17ad5ee0636ca"}
Dec 02 11:22:53 crc kubenswrapper[4711]: I1202 11:22:53.283864 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerStarted","Data":"56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"}
Dec 02 11:22:53 crc kubenswrapper[4711]: I1202 11:22:53.283892 4711 scope.go:117] "RemoveContainer" containerID="6e2baf13172b88d9c53f92006f1ca98135d4228ebc3bedfea42ca32837e68530"
Dec 02 11:23:41 crc kubenswrapper[4711]: I1202 11:23:41.912373 4711 generic.go:334] "Generic (PLEG): container finished" podID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerID="f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c" exitCode=0
Dec 02 11:23:41 crc kubenswrapper[4711]: I1202 11:23:41.912543 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr647/must-gather-bxxxc" event={"ID":"cabdab0c-1ff8-4185-b7a7-3785a1eef79b","Type":"ContainerDied","Data":"f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c"}
Dec 02 11:23:41 crc kubenswrapper[4711]: I1202 11:23:41.916436 4711 scope.go:117] "RemoveContainer" containerID="f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c"
Dec 02 11:23:42 crc kubenswrapper[4711]: I1202 11:23:42.288595 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kr647_must-gather-bxxxc_cabdab0c-1ff8-4185-b7a7-3785a1eef79b/gather/0.log"
Dec 02 11:23:52 crc kubenswrapper[4711]: I1202 11:23:52.376517 4711 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kr647/must-gather-bxxxc"]
Dec 02 11:23:52 crc kubenswrapper[4711]: I1202 11:23:52.379033 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kr647/must-gather-bxxxc" podUID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerName="copy" containerID="cri-o://a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0" gracePeriod=2
Dec 02 11:23:52 crc kubenswrapper[4711]: I1202 11:23:52.391221 4711 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kr647/must-gather-bxxxc"]
Dec 02 11:23:52 crc kubenswrapper[4711]: I1202 11:23:52.802509 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kr647_must-gather-bxxxc_cabdab0c-1ff8-4185-b7a7-3785a1eef79b/copy/0.log"
Dec 02 11:23:52 crc kubenswrapper[4711]: I1202 11:23:52.803207 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:23:52 crc kubenswrapper[4711]: I1202 11:23:52.933999 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-975vg\" (UniqueName: \"kubernetes.io/projected/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-kube-api-access-975vg\") pod \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\" (UID: \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\") "
Dec 02 11:23:52 crc kubenswrapper[4711]: I1202 11:23:52.934077 4711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-must-gather-output\") pod \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\" (UID: \"cabdab0c-1ff8-4185-b7a7-3785a1eef79b\") "
Dec 02 11:23:52 crc kubenswrapper[4711]: I1202 11:23:52.943325 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-kube-api-access-975vg" (OuterVolumeSpecName: "kube-api-access-975vg") pod "cabdab0c-1ff8-4185-b7a7-3785a1eef79b" (UID: "cabdab0c-1ff8-4185-b7a7-3785a1eef79b"). InnerVolumeSpecName "kube-api-access-975vg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.036663 4711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-975vg\" (UniqueName: \"kubernetes.io/projected/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-kube-api-access-975vg\") on node \"crc\" DevicePath \"\""
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.055740 4711 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kr647_must-gather-bxxxc_cabdab0c-1ff8-4185-b7a7-3785a1eef79b/copy/0.log"
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.056600 4711 generic.go:334] "Generic (PLEG): container finished" podID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerID="a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0" exitCode=143
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.056669 4711 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr647/must-gather-bxxxc"
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.056680 4711 scope.go:117] "RemoveContainer" containerID="a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0"
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.088875 4711 scope.go:117] "RemoveContainer" containerID="f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c"
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.136491 4711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cabdab0c-1ff8-4185-b7a7-3785a1eef79b" (UID: "cabdab0c-1ff8-4185-b7a7-3785a1eef79b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.140735 4711 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cabdab0c-1ff8-4185-b7a7-3785a1eef79b-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.164987 4711 scope.go:117] "RemoveContainer" containerID="a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0"
Dec 02 11:23:53 crc kubenswrapper[4711]: E1202 11:23:53.165588 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0\": container with ID starting with a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0 not found: ID does not exist" containerID="a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0"
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.165665 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0"} err="failed to get container status \"a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0\": rpc error: code = NotFound desc = could not find container \"a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0\": container with ID starting with a52b2a9e2bf3dd611833319ed54617b4141941334e9eb55a72f5d02c7fd93fa0 not found: ID does not exist"
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.165715 4711 scope.go:117] "RemoveContainer" containerID="f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c"
Dec 02 11:23:53 crc kubenswrapper[4711]: E1202 11:23:53.166146 4711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c\": container with ID starting with f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c not found: ID does not exist" containerID="f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c"
Dec 02 11:23:53 crc kubenswrapper[4711]: I1202 11:23:53.166211 4711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c"} err="failed to get container status \"f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c\": rpc error: code = NotFound desc = could not find container \"f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c\": container with ID starting with f747db5493cbcfc3f8b67c4e8cbb3b862916ff915400317171f544d78d6ff42c not found: ID does not exist"
Dec 02 11:23:55 crc kubenswrapper[4711]: I1202 11:23:55.097241 4711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" path="/var/lib/kubelet/pods/cabdab0c-1ff8-4185-b7a7-3785a1eef79b/volumes"
Dec 02 11:24:52 crc kubenswrapper[4711]: I1202 11:24:52.586331 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 11:24:52 crc kubenswrapper[4711]: I1202 11:24:52.587122 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 11:25:22 crc kubenswrapper[4711]: I1202 11:25:22.585764 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 11:25:22 crc kubenswrapper[4711]: I1202 11:25:22.586537 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 11:25:38 crc kubenswrapper[4711]: I1202 11:25:38.159388 4711 scope.go:117] "RemoveContainer" containerID="4f001e835c1b61f480a53380da1aa11173a3759b2a010432fcd52129746d8f07"
Dec 02 11:25:52 crc kubenswrapper[4711]: I1202 11:25:52.585385 4711 patch_prober.go:28] interesting pod/machine-config-daemon-9b9cn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 11:25:52 crc kubenswrapper[4711]: I1202 11:25:52.585986 4711 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 11:25:52 crc kubenswrapper[4711]: I1202 11:25:52.586077 4711 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn"
Dec 02 11:25:52 crc kubenswrapper[4711]: I1202 11:25:52.586815 4711 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"} pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 11:25:52 crc kubenswrapper[4711]: I1202 11:25:52.586867 4711 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerName="machine-config-daemon" containerID="cri-o://56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a" gracePeriod=600
Dec 02 11:25:53 crc kubenswrapper[4711]: E1202 11:25:53.340747 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:25:53 crc kubenswrapper[4711]: I1202 11:25:53.494174 4711 generic.go:334] "Generic (PLEG): container finished" podID="0641e884-c845-499c-9ce6-0c4f1a893b5a" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a" exitCode=0
Dec 02 11:25:53 crc kubenswrapper[4711]: I1202 11:25:53.494216 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" event={"ID":"0641e884-c845-499c-9ce6-0c4f1a893b5a","Type":"ContainerDied","Data":"56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"}
Dec 02 11:25:53 crc kubenswrapper[4711]: I1202 11:25:53.494250 4711 scope.go:117] "RemoveContainer" containerID="291d751448235148339342bfa37f282b1f70919e0f764ee9dfb17ad5ee0636ca"
Dec 02 11:25:53 crc kubenswrapper[4711]: I1202 11:25:53.495114 4711 scope.go:117] "RemoveContainer" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"
Dec 02 11:25:53 crc kubenswrapper[4711]: E1202 11:25:53.495560 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:26:07 crc kubenswrapper[4711]: I1202 11:26:07.078411 4711 scope.go:117] "RemoveContainer" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"
Dec 02 11:26:07 crc kubenswrapper[4711]: E1202 11:26:07.079077 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:26:19 crc kubenswrapper[4711]: I1202 11:26:19.078475 4711 scope.go:117] "RemoveContainer" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"
Dec 02 11:26:19 crc kubenswrapper[4711]: E1202 11:26:19.079556 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:26:33 crc kubenswrapper[4711]: I1202 11:26:33.079707 4711 scope.go:117] "RemoveContainer" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"
Dec 02 11:26:33 crc kubenswrapper[4711]: E1202 11:26:33.080547 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:26:48 crc kubenswrapper[4711]: I1202 11:26:48.078216 4711 scope.go:117] "RemoveContainer" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"
Dec 02 11:26:48 crc kubenswrapper[4711]: E1202 11:26:48.079123 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:27:00 crc kubenswrapper[4711]: I1202 11:27:00.078558 4711 scope.go:117] "RemoveContainer" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"
Dec 02 11:27:00 crc kubenswrapper[4711]: E1202 11:27:00.080395 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:27:12 crc kubenswrapper[4711]: I1202 11:27:12.079760 4711 scope.go:117] "RemoveContainer" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"
Dec 02 11:27:12 crc kubenswrapper[4711]: E1202 11:27:12.081344 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.617478 4711 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s4vw2"]
Dec 02 11:27:22 crc kubenswrapper[4711]: E1202 11:27:22.618817 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerName="gather"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.618854 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerName="gather"
Dec 02 11:27:22 crc kubenswrapper[4711]: E1202 11:27:22.618874 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerName="extract-content"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.618885 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerName="extract-content"
Dec 02 11:27:22 crc kubenswrapper[4711]: E1202 11:27:22.618914 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerName="copy"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.618926 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerName="copy"
Dec 02 11:27:22 crc kubenswrapper[4711]: E1202 11:27:22.619018 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerName="registry-server"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.619033 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerName="registry-server"
Dec 02 11:27:22 crc kubenswrapper[4711]: E1202 11:27:22.619052 4711 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerName="extract-utilities"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.619064 4711 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerName="extract-utilities"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.619404 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerName="copy"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.619438 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b13bc6-46b2-431d-8ebe-5b998914ba57" containerName="registry-server"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.619461 4711 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabdab0c-1ff8-4185-b7a7-3785a1eef79b" containerName="gather"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.623335 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.629593 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4vw2"]
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.720329 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07fc4a92-1a34-4792-b10f-293fdde1ae41-utilities\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.720670 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwd7\" (UniqueName: \"kubernetes.io/projected/07fc4a92-1a34-4792-b10f-293fdde1ae41-kube-api-access-hgwd7\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.720712 4711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07fc4a92-1a34-4792-b10f-293fdde1ae41-catalog-content\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.822204 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07fc4a92-1a34-4792-b10f-293fdde1ae41-utilities\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.822350 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgwd7\" (UniqueName: \"kubernetes.io/projected/07fc4a92-1a34-4792-b10f-293fdde1ae41-kube-api-access-hgwd7\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.822413 4711 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07fc4a92-1a34-4792-b10f-293fdde1ae41-catalog-content\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.822907 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07fc4a92-1a34-4792-b10f-293fdde1ae41-utilities\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.822921 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07fc4a92-1a34-4792-b10f-293fdde1ae41-catalog-content\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.849910 4711 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgwd7\" (UniqueName: \"kubernetes.io/projected/07fc4a92-1a34-4792-b10f-293fdde1ae41-kube-api-access-hgwd7\") pod \"redhat-operators-s4vw2\" (UID: \"07fc4a92-1a34-4792-b10f-293fdde1ae41\") " pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:22 crc kubenswrapper[4711]: I1202 11:27:22.955232 4711 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:23 crc kubenswrapper[4711]: I1202 11:27:23.389518 4711 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4vw2"]
Dec 02 11:27:23 crc kubenswrapper[4711]: W1202 11:27:23.396209 4711 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07fc4a92_1a34_4792_b10f_293fdde1ae41.slice/crio-1a227d871d8f30cad8d6a596e59cbce7f10289320203dbb9589c849a39d426ab WatchSource:0}: Error finding container 1a227d871d8f30cad8d6a596e59cbce7f10289320203dbb9589c849a39d426ab: Status 404 returned error can't find the container with id 1a227d871d8f30cad8d6a596e59cbce7f10289320203dbb9589c849a39d426ab
Dec 02 11:27:23 crc kubenswrapper[4711]: I1202 11:27:23.516826 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4vw2" event={"ID":"07fc4a92-1a34-4792-b10f-293fdde1ae41","Type":"ContainerStarted","Data":"1a227d871d8f30cad8d6a596e59cbce7f10289320203dbb9589c849a39d426ab"}
Dec 02 11:27:24 crc kubenswrapper[4711]: I1202 11:27:24.531913 4711 generic.go:334] "Generic (PLEG): container finished" podID="07fc4a92-1a34-4792-b10f-293fdde1ae41" containerID="11c22b07d9301d246d97fe87c2b5c1c80fa78a20131100f28615591db83aba98" exitCode=0
Dec 02 11:27:24 crc kubenswrapper[4711]: I1202 11:27:24.532536 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4vw2" event={"ID":"07fc4a92-1a34-4792-b10f-293fdde1ae41","Type":"ContainerDied","Data":"11c22b07d9301d246d97fe87c2b5c1c80fa78a20131100f28615591db83aba98"}
Dec 02 11:27:24 crc kubenswrapper[4711]: I1202 11:27:24.535181 4711 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 11:27:26 crc kubenswrapper[4711]: I1202 11:27:26.078590 4711 scope.go:117] "RemoveContainer" containerID="56b61addd779659ab1b50356ea960817fdfadda9a4e526a73a39e0611f507f9a"
Dec 02 11:27:26 crc kubenswrapper[4711]: E1202 11:27:26.079256 4711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9b9cn_openshift-machine-config-operator(0641e884-c845-499c-9ce6-0c4f1a893b5a)\"" pod="openshift-machine-config-operator/machine-config-daemon-9b9cn" podUID="0641e884-c845-499c-9ce6-0c4f1a893b5a"
Dec 02 11:27:26 crc kubenswrapper[4711]: I1202 11:27:26.555198 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4vw2" event={"ID":"07fc4a92-1a34-4792-b10f-293fdde1ae41","Type":"ContainerStarted","Data":"2e3a8a41d7b2d210a5a483756bc94ca7c2871a87ab1fb9c74ff579e735427ea2"}
Dec 02 11:27:27 crc kubenswrapper[4711]: I1202 11:27:27.576187 4711 generic.go:334] "Generic (PLEG): container finished" podID="07fc4a92-1a34-4792-b10f-293fdde1ae41" containerID="2e3a8a41d7b2d210a5a483756bc94ca7c2871a87ab1fb9c74ff579e735427ea2" exitCode=0
Dec 02 11:27:27 crc kubenswrapper[4711]: I1202 11:27:27.576311 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4vw2" event={"ID":"07fc4a92-1a34-4792-b10f-293fdde1ae41","Type":"ContainerDied","Data":"2e3a8a41d7b2d210a5a483756bc94ca7c2871a87ab1fb9c74ff579e735427ea2"}
Dec 02 11:27:29 crc kubenswrapper[4711]: I1202 11:27:29.599183 4711 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4vw2" event={"ID":"07fc4a92-1a34-4792-b10f-293fdde1ae41","Type":"ContainerStarted","Data":"acdc5d287d0f75cc4c1b786fe36471d9dc33345a4fe060cd8c6029d96bbee5ea"}
Dec 02 11:27:29 crc kubenswrapper[4711]: I1202 11:27:29.626926 4711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s4vw2" podStartSLOduration=3.961008663 podStartE2EDuration="7.626884168s" podCreationTimestamp="2025-12-02 11:27:22 +0000 UTC" firstStartedPulling="2025-12-02 11:27:24.534911177 +0000 UTC m=+4434.244277624" lastFinishedPulling="2025-12-02 11:27:28.200786682 +0000 UTC m=+4437.910153129" observedRunningTime="2025-12-02 11:27:29.617564528 +0000 UTC m=+4439.326930995" watchObservedRunningTime="2025-12-02 11:27:29.626884168 +0000 UTC m=+4439.336250625"
Dec 02 11:27:32 crc kubenswrapper[4711]: I1202 11:27:32.955698 4711 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:32 crc kubenswrapper[4711]: I1202 11:27:32.956279 4711 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s4vw2"
Dec 02 11:27:34 crc kubenswrapper[4711]: I1202 11:27:34.026585 4711 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s4vw2" podUID="07fc4a92-1a34-4792-b10f-293fdde1ae41" containerName="registry-server" probeResult="failure" output=<
Dec 02 11:27:34 crc kubenswrapper[4711]: timeout: failed to connect service ":50051" within 1s
Dec 02 11:27:34 crc kubenswrapper[4711]: >